The fundamentals of writing a framework

In software engineering, a framework is an abstraction in which software providing generic functionality can be selectively extended with implementation-specific code to produce application-specific software.

So SOLID

A good framework adheres to the SOLID principles of object-oriented design. These principles are:

  1. Single responsibility principle. A class should do just one thing (and do it well).
  2. Open/closed principle. A class should be open for extension but closed for modification.
  3. Liskov substitution principle. Objects in a program should be replaceable with instances of their subtypes without breaking the application. See: polymorphism.
  4. Interface segregation principle. Many client-specific interfaces are better than fewer general-purpose interfaces.
  5. Dependency inversion principle. Depend upon abstractions, not concretions. See: interfaces or protocols (a short sketch follows this list).
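
To illustrate that last principle, here is a minimal sketch of dependency inversion, written in TypeScript for brevity; the interface and class names are entirely hypothetical.

```typescript
// A minimal sketch of dependency inversion; all names are hypothetical.
interface PlaylistSource {
  fetchPlaylist(id: string): Promise<string[]>;
}

// Framework-level code depends only on the abstraction...
class Player {
  constructor(private readonly source: PlaylistSource) {}

  async play(id: string): Promise<void> {
    const items = await this.source.fetchPlaylist(id);
    console.log(`Playing ${items.length} items`);
  }
}

// ...while the application supplies the concretion.
class HttpPlaylistSource implements PlaylistSource {
  async fetchPlaylist(id: string): Promise<string[]> {
    // Application-specific loading would go here; stubbed for the sketch.
    return [`${id}-intro`, `${id}-main`];
  }
}

new Player(new HttpPlaylistSource()).play("grand-prix");
```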

These principles are mostly self-explanatory, though I think the open/closed one is a little more ambiguous, in that deciding to what degree a class should be open or closed is often a matter of circumstance and opinion.

Personally, when dealing with access modifiers such as private, protected, internal or public (which may be named differently, or may not exist at all, depending on the language in question), I make very few methods and properties private to the class itself, because it’s impossible to be certain that they won’t need to be accessed or extended in a subclass at some point down the line. So by default I tend to use whichever access modifier makes the method or property accessible to the class that defines it and to its descendants, but which prevents access from outside.
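
As a rough illustration of that default (in TypeScript, with invented class and member names), protected members remain reachable from subclasses while staying hidden from the outside world:

```typescript
// A sketch of the "protected by default" preference described above.
class MediaComponent {
  // Accessible to this class and its subclasses, but not from outside.
  protected bufferLength = 4;

  protected rebuffer(): void {
    console.log(`Rebuffering with a ${this.bufferLength}s buffer`);
  }

  // Reserve private for genuine internals that no subclass should ever need.
  private internalCounter = 0;
}

class LowLatencyComponent extends MediaComponent {
  // The subclass can tune inherited behaviour without hacks or duplication.
  protected override rebuffer(): void {
    this.bufferLength = 1;
    super.rebuffer();
  }
}

// new LowLatencyComponent().rebuffer(); // compile error: protected from outside
```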

Of course there are occasions when private is more appropriate. But few things are more infuriating than trying to extend a supposedly extensible framework class and discovering that a method or property that is required for this new functionality is inaccessible. Deciding between duplicating the inaccessible functionality or forcing access to it with a hack is unpleasant and shouldn’t be necessary.

One other notable point about the SOLID principles is that, in general, they are numbered according to their importance and the impact they have on a codebase. Each principle that is adhered to makes the subsequent principles easier to adhere to; likewise, each principle that is neglected makes the subsequent ones increasingly difficult.

With that in mind, it is imperative to have ‘single responsibility’ at the forefront of your mind when designing your framework, because if that principle is adhered to correctly then the others almost fall into place by themselves.

When it all goes wrong

Recently I was working on a project that was based on a framework developed by an internal team. The framework had been extracted from a previous product, which is always a tricky proposition, but if the SOLID principles above are adhered to then it is possible to extract a usable framework in this way.

However, what was painfully clear on this project was that these principles had not been adhered to at all: classes were often enormous with multiple responsibilities; they were difficult to extend with the methods and properties that would be useful to a subclass often being private; subclasses would often disregard their supertypes’ contracts; and interfaces were like gold dust with the vast majority of the framework written for concrete implementations.

On top of all that, the framework also contained lots of implementation-specific code that the framework team must have felt was so intrinsic to the company’s applications that having it written into the framework was acceptable. But in the real world where a client’s prerogative is to make unexpected (and sometimes illogical!) change requests, the result was that the application was sometimes forced to extend classes that performed unwanted work, only to have to undo that work itself immediately afterwards. Needless to say, this was wasteful.

In some cases, the framework’s classes performed so much unwanted work that – in the interest of application performance – it was actually necessary to exploit the order in which packages were loaded by the compiler to override the framework’s class with a local application version, just so that all that unnecessary work could be stripped out. And yes, this also meant that any of the framework class’ code that we wanted to maintain had to be duplicated. Why not simply substitute the class? Because as mentioned above, everything was written for concrete implementations.

Unsurprisingly, this framework felt more like a monster that had to be wrestled with than a helpful tool. The development of every new application feature was tedious, laborious, frustrating and time-consuming. But a rewrite wasn’t possible because the framework was too large and was used by too many applications.

Keep it simple

As well as adhering to the above SOLID principles, it is important to remember that the whole point of a framework is to provide an abstract foundation onto which an application developer can build their application-specific software with their implementation-specific code: a framework should therefore NEVER contain implementation-specific code itself!

This is not to say that a framework developer is prohibited from providing their application developer colleagues with functionality that is common to their company’s applications – they may indeed do so. But the correct approach is not to pollute the framework with it, because the framework should remain abstract: instead, this common functionality should be placed in a separate package, allowing the application developer to use it because they want to, not because they have to.
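
Sketched in TypeScript with entirely hypothetical names, that separation might look something like this: the framework layer stays abstract, the shared functionality lives in an opt-in “common” package, and the application decides what to use.

```typescript
// --- framework layer: abstract only, no company-specific behaviour ---
abstract class BaseScene {
  abstract render(): void;
}

// --- optional "common" package: shared functionality, used only if wanted ---
class BrandedHeaderScene extends BaseScene {
  render(): void {
    console.log("Rendering the shared branded header");
  }
}

// --- application: free to use the common scene, or to roll its own ---
class CustomScene extends BaseScene {
  render(): void {
    console.log("Rendering something application-specific instead");
  }
}

[new BrandedHeaderScene(), new CustomScene()].forEach((scene) => scene.render());
```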

The 4 Good Things

The BBC’s Future Media department promotes the use of four key principles across its engineering teams, known as “The 4 Good Things”.

The interpretation of these principles tends to vary a little across each team, but generally speaking they provide a consistent approach to software development across a number of different projects, platforms and languages.

Here are those four principles along with my personal interpretations.

1. Meaningful Code Reviews

  • Code should be written in pairs whenever possible, or reviewed by someone else at the earliest opportunity when it’s not
  • Only code successfully reviewed should be merged to trunk
  • Traceability is recorded (who did the code review and who wrote the code)

2. Developer Accountability for Non-Functional Requirements

  • Operational considerations (i.e. NFRs) are part of the definition of ‘done’
  • NFRs should be discussed with stakeholders in depth
  • A feature/story isn’t complete until the team is confident that it can be managed in production and perform well under foreseeable load and circumstances
  • Developers are accountable for monitoring the health of the product

3. Real Continuous Integration

  • Smoke tests that are automatically run on each commit (and at least daily) in order to confirm that the latest commit has not fundamentally broken the build and/or the world
  • Tests must be in a shared environment so that they cover changes made by all developers, and because “it works on my machine” is not the point
  • Tests are run against trunk

4. Automated Acceptance Tests

  • NFRs are explicitly stated in the stories wherever possible to help ensure the definition of ‘done’ is met
  • Acceptance tests should include all of the functional tests
  • Acceptance tests should run on the TEST environment

The Pied Piper of JavaScript

For a couple of years now, JavaScript has been encroaching into territory traditionally held by Flash.

At the BBC’s TVMP department, news of set top boxes making the switch to JavaScript seems to be an almost daily occurrence as manufacturers fall over themselves to jump onto the JavaScript bandwagon for alleged “faster development” and the mythical “build once, deploy everywhere” utopian ideal.

I’ve witnessed the repetition of such claims myself. “We’ll be switching this product to JavaScript in the near future because – obviously – it’s faster” a product owner once said, followed by the nodding heads and murmurs of agreement of other non-developers.

The reality for those of us who have not succumbed to the merry tune of the JavaScript Pied Piper’s flute is that “build once, deploy everywhere” is nonsense as far as JavaScript is concerned and the platform simply isn’t mature enough to competently support apps like this without extensive – and in terms of time, expensive – hacks and workarounds.

Time is being spent filling the gaps of missing libraries and frameworks by converting codebases from other languages – including Flash.

The Flash version of the Sports app that delivered the Olympics last year was released in time for the F1 season in March, while the JavaScript version missed that deadline and was instead launched for Euro 2012 a few weeks later. A few weeks is no big deal of course, except that the JavaScript version was started four months before the Flash version in the summer of 2011.

True, the JavaScript version targeted more devices: half a dozen TVs compared to the Flash version’s single target platform of TiVo. But using the number of target devices as a reason for delay is contrary to the “build once, deploy everywhere” mantra.

Ironically, it’s the Flash version that lives up to the claim because although it was developed solely with the TiVo in mind, it runs perfectly without modification on a number of other devices including Western Digital and Popcorn set top boxes.

The Events app on which I was lead developer was developed from scratch in Flash in less than four months. It exceeded expectations by achieving not only the MVP (minimum viable product) but a few stretch goals as well, and even had a week to spare at the end. A JavaScript version wasn’t even attempted because in a somewhat surreal situation, the same management who echo-chamber to each other that JavaScript development is faster and easier were also in agreement that there wasn’t enough time to build a JavaScript version.

The first few converted devices have started to make their way into the office and are now running JavaScript versions of the Flash apps they once ran. Performance on the new JavaScript versions is poor, with product owners even electing to disable animation in an attempt to maintain an acceptable level of responsiveness. All this hard work in switching platforms so that we can deliver exactly the same features but with a fraction of the performance… 😬

The Halo Framework (AS2)

Before work on the BBC Events application could begin, a framework was required that would provide the team with a foundation on which to build a robust, memory-efficient app that could be re-skinned for future events.

I completed the majority of the work in three intensive days, which allowed the rest of the team to begin work on the app itself. Embellishments were applied over the remainder of the week.

The core framework is designed around the BBC’s One Service UI which is a drive to use a consistent UX across all current and future products.

It allows for efficient use of specialised components which are created and destroyed at runtime based on what the application requires to render each feed. Components are owned by sections, sections are owned by scenes, and a component can be shared across sections or destroyed between them as required for maximum efficiency.

The components themselves are based on an MVC pattern and are encapsulated by a component class and interface through which they communicate with the main application. This design enforces a consistent approach to developing additional components – as well as application structure as a whole – and means it will be straightforward to add new components to the framework in the future.
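
The framework itself was written in AS2, but the shape of that contract can be sketched in TypeScript along these lines (all names here are illustrative rather than Halo’s actual API):

```typescript
// A rough sketch of a component contract owned by sections.
interface IComponent {
  create(): void;               // build views/controllers when a feed needs them
  destroy(): void;              // free everything so memory can be reclaimed
  update(feed: unknown): void;  // re-render when new feed data arrives
}

class Section {
  private components: IComponent[] = [];

  add(component: IComponent): void {
    component.create();
    this.components.push(component);
  }

  // Destroy components between sections unless they are shared with the next one.
  teardown(shared: Set<IComponent>): void {
    this.components.filter((c) => !shared.has(c)).forEach((c) => c.destroy());
    this.components = this.components.filter((c) => shared.has(c));
  }
}
```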

Halo allowed us to create the Events app in record time – less than 4 months from start to finish which is approximately 30% less time than was spent on the Sports app last year.

The Xenon Embedded Media Player

At the end of November last year the BBC launched the Connected Red Button service on Virgin TiVo, along with updated versions of the iPlayer, News and Sport apps. The package also included a fifth release: an application that is both integral and vital to each of the above apps, yet few are even aware of its existence!

Xenon is a brand new embedded media player that was developed specifically for AS2 embedded devices such as Virgin’s TiVo, to be used by applications like iPlayer, News, Sport and any other such applications that the BBC develops in future.

The development of a new player was actually driven by Connected Red Button. The CRB project was subject to some aggressive performance requirements that were going to be difficult to achieve even with an ideal setup on the modest TiVo hardware, but impossible with the existing media player which had been built years before to a different set of requirements. If we were going to stand a chance of meeting CRB’s performance requirements then we’d need to build a new player specifically for this purpose.

A new beginning

Xenon is based on an MVCS pattern with separate services performing the important tasks of reading playlists, parsing Media Selector responses, loading subtitles and streaming the media itself, while dedicated views handle the presentation to the user. It can be used both by standalone applications and those that are based on our AS2 Krypton Framework: the benefit of the latter being that the host application can then take advantage of the framework’s underlying BDD functionality for automated tests.

Inbound communications are provided by an API that the host application (i.e. iPlayer) can use to control everything from the desired media bitrate to subtitle preferences in real-time, while an events system to which the host can subscribe is used for all outbound communications. This events system provides the host application with around 70 different events, though only 20 of them are required for “normal” use: the others are either used for debugging, were included in order to support possible future requirements or are there simply to provide the host with more specific information on given scenarios.
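
To make that split concrete, here is a rough TypeScript sketch of the pattern, a direct inbound API plus outbound event subscriptions, using invented method and event names rather than Xenon’s real ones:

```typescript
// Hypothetical event names; the real player exposes around 70 of them.
type PlayerEvent = "MEDIA_LOADED" | "BUFFERING" | "SUBTITLES_READY";

class EmbeddedPlayer {
  private listeners = new Map<PlayerEvent, Array<() => void>>();

  // Inbound API: the host application drives the player directly.
  setPreferredBitrate(kbps: number): void {
    console.log(`Preferred bitrate set to ${kbps} kbps`);
  }

  // Outbound events: the host subscribes only to what it cares about.
  on(event: PlayerEvent, handler: () => void): void {
    const handlers = this.listeners.get(event) ?? [];
    handlers.push(handler);
    this.listeners.set(event, handlers);
  }

  protected emit(event: PlayerEvent): void {
    (this.listeners.get(event) ?? []).forEach((handler) => handler());
  }
}

// Host application (e.g. iPlayer) usage:
const player = new EmbeddedPlayer();
player.on("MEDIA_LOADED", () => console.log("ready to play"));
player.setPreferredBitrate(3200);
```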

The project is covered by around 200 unit tests which not only provide confidence in the existing code but will also provide future engineers with a means to test any additions or refactors.

So what’s new?

Viewers should notice significant improvements in performance, some of which are captured by the following statistics:

  • Media load times have been halved
  • Video scrub times are now 3-4 times faster
  • Toggling between SD and HD is now 3-4 times faster
  • Subtitle load times have improved tenfold, down from 30-40 seconds to just 3-4 seconds

In order to achieve this kind of performance we had to employ a number of tricks!

One of the things we looked at was the process by which the old player would select and then connect to a server. We were interested in improving this because we’d noticed a significant delay between getting the list of stream locations from Media Selector and the start of playback.

It turns out that the old player would make multiple connection requests to its chosen location using different ports – 1935, 80 and 443 – which of course are the standard FMS ports. When one connection responded favourably the other two were closed and the stream was requested from the open one.

Trying all the ports in parallel to avoid the delays of working through a list in series would be worthwhile if each port had a similar chance of success, and on a platform like the PC, which operates on different types of networks with different routers and firewalls, that’s most probably the case.

On TiVo, however, unless there’s a serious problem with the chosen content delivery network (CDN), the 1935 connection will work 100% of the time, and if it doesn’t work then we’re better off trying a different CDN than the same one again on a different port.

So that’s what Xenon does: if the single connection request on port 1935 to its first CDN fails to connect, it moves on to the next CDN and sends another single connection request to that one on the same port. Since each CDN has plenty of fail-over and redundancy of its own, if the user is unable to connect to any of them then either their internet connection is down or they have bigger problems!

This approach means the box only has to allocate enough resources to create and connect a single NetConnection, which leaves more resources available elsewhere to help deliver a more responsive user experience.
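
A simplified sketch of that fallback logic, written in TypeScript with a stubbed connect() helper standing in for the real NetConnection work, looks something like this:

```typescript
// Hypothetical helper: real code would attempt an RTMP NetConnection here.
async function connect(host: string, port: number): Promise<boolean> {
  return port === 1935 && host.length > 0; // stubbed for the sketch
}

async function connectToStream(cdns: string[]): Promise<string | null> {
  for (const cdn of cdns) {
    // One request on port 1935 per CDN, rather than three ports in parallel.
    if (await connect(cdn, 1935)) {
      return cdn;
    }
    // If 1935 fails, a different CDN is more likely to help than another port.
  }
  return null; // every CDN failed: the viewer has bigger problems
}

connectToStream(["cdn-a.example.com", "cdn-b.example.com"]).then((cdn) =>
  console.log(cdn ? `Connected to ${cdn}` : "No CDN reachable")
);
```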

Another thing we looked at was subtitles. The old player took 40-60 seconds to resume playback after subtitles were enabled and we really wanted to bring this figure down.

When we looked at how subtitles were being handled in the old player, we discovered that typed objects with verified data were being created as soon as the file was downloaded and parsed. A 60-minute programme with lots of talking might contain hundreds of lines of subtitles, which would require a fair amount of work to get through. Again, such an approach makes sense on a PC but not so much on a set top box.

The way we got round this in Xenon was to download and parse the XML file as before, but to create only basic reference objects at first, without typing or validation, and to defer those calculations until the reference object became relevant to the stream based on the current playback position. At that point the object’s data is verified and converted into a more usable format before the subtitle is displayed on-screen.

This approach means a little more work is done by the player for each individual subtitle as playback progresses, but it also means we can spread the necessary calculations over the duration of the program rather than doing them all up-front which allows us to resume playback in just 3-4 seconds.
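
Sketched in TypeScript (with invented field names rather than the player’s actual data structures), the deferred parsing looks roughly like this:

```typescript
// Cheap reference objects come straight from the parsed file...
interface RawCue { begin: string; end: string; text: string; }
// ...and are only typed/validated when playback reaches them.
interface Cue { beginMs: number; endMs: number; text: string; }

// "hh:mm:ss" to milliseconds; real code would validate the format here.
const toMs = (t: string): number => {
  const [h, m, s] = t.split(":").map(Number);
  return ((h * 60 + m) * 60 + s) * 1000;
};

class SubtitleTrack {
  private index = 0;      // advances with playback; cues assumed to be in order
  private current?: Cue;  // only the cue in hand is ever fully parsed

  constructor(private readonly raw: RawCue[]) {}

  cueAt(positionMs: number): Cue | undefined {
    while (this.index < this.raw.length) {
      let cue = this.current;
      if (!cue) {
        const r = this.raw[this.index];
        cue = { beginMs: toMs(r.begin), endMs: toMs(r.end), text: r.text };
        this.current = cue;
      }
      if (positionMs < cue.beginMs) return undefined; // not due yet
      if (positionMs <= cue.endMs) return cue;        // on screen now
      this.index++;             // cue has passed; parse the next one on demand
      this.current = undefined;
    }
    return undefined;
  }
}
```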

What it means for us

Some specifications and supported features are as follows:

  • The project weighs in at around 6,000 lines
  • Both audio and video streams are supported, both live and on-demand
  • Akamai, Limelight and Level3 content delivery networks (CDNs) are supported
  • Dynamic CDN switching and adaptive bitrate (ABR) downswitching are both supported
  • Dynamic weighting and prioritisation are supported, with appropriate data supplied by Media Selector 5

The new player also provides a number of benefits for our development teams:

  • An API that allows direct control of every aspect of the player at runtime
  • A comprehensive and consistent events system which makes debugging quick and easy
  • A complementary example application that provides both an implementation template for new teams and a convenient way for us to diagnose issues with the live service
  • The codebase is extensively covered by tests which provides confidence in the robustness of the code.

Although invisible to the average user, Xenon has received favourable feedback from the public with comments such as:

I have found since BBC released the Connected Red Button on TiVo, BBC iPlayer has worked perfectly. I have watched loads on there over Christmas mainly in HD and I’ve no stuttering of the picture or any exiting from the program. Also it pauses and restarts without exiting back to menu. And if you press the forward arrow slowly between each press you can jump in 1 minute segments when fast forwarding so it is pretty easy to get to a certain part of the program. Now that the resume works with SD and HD programmes you can also leave and come back to where you left off earlier. I used to have all the problems like what most others have reported but now I can use BBC iPlayer at any time (including peak time) with no problems.
– spj20016, Digital Spy Forum

I managed to watch the whole of Supersized Earth in HD last night on Tivo iPlayer without a single blip, stutter or buffering. Something has improved.
– m0j00, Virgin Media Support Forum

Even the pause button works without it throwing you back to the beginning.
– Markynotts, Digital Spy Forum

Review: SoftPerfect Connection Emulator

Company: SoftPerfect
Product: Connection Emulator
Price: From $149.00

While working on the BBC Sports app for Virgin TiVo, it would be safe to say that I encountered more problems than one would expect on such a project. Some of them seemed to be bugs in Adobe’s StageCraft (Flash Lite 3.1), others seemed to be inconsistencies in the implementation of the Flash player on the box, and still others came from unexpected content delivery network behaviour. As a result there were a number of feature-related tasks that turned out to be more difficult to implement than they perhaps should have been, and as a team we spent a number of days in total investigating the causes of unexpected and/or undesirable behaviour in the app that resulted from these issues.

Still, this post isn’t about reflecting on the highs and lows of development or about detailing the inner workings of Virgin Media‘s flagship device: rather, it’s about a great little tool that I found while trying to diagnose one of the aforementioned problems: Connection Emulator by SoftPerfect.

The Euro 2012 tournament was about to begin and an updated version of the app was released to support it. During the opening match we received reports of stuttering video playback which we were able to replicate ourselves on some of the test devices in the office. What we were seeing came as quite a surprise because the first version of the app which was released for the start of the F1 season exhibited no such problems and played back its content without issue. All of the test content we had used during development had also played as intended. What was different about these new football streams that could cause such symptoms?

On the second or third day I was asked to take a look at the issue after different builds with different settings had failed to resolve it, and at the same time I was provided with some useful information including figures for the CDN (content delivery network) that was supplying the content. Some aspects of those figures were a little concerning and I wanted to run some tests on the TiVo under similar conditions to see how it responded.

I already had a tool that could simulate a limit in bandwidth, but in order to carry out the tests that I had in mind I needed something that could also simulate latency and packet loss. I develop on Windows because FlashDevelop (the greatest Flash IDE in the world) is only available for that OS, so Mac OS X’s built-in network simulation tools weren’t an option. After a few minutes on Google I discovered SoftPerfect’s Connection Emulator, which claimed to offer precisely these features, so I downloaded the trial and took it for a spin.

Immediately the application instils confidence that you’re using a professional tool. The interface is well designed with separate tabs for each aspect of network simulation: transfer rate, latency, packet loss, duplication and reordering. The respective options on each tab are comprehensive and each option can be enabled and adjusted separately during or between tests. A graph at the bottom of the window gives a visual indication of the effect that your current settings are having on your machine’s throughput, with more precise statistics available in a separate pane.

Using the application I was able to test the TiVo under a variety of conditions with varying latency and packet loss, and along with throughput calculators (see here and here) I was able to show conclusively why the football streams were stuttering: despite a dedicated connection of up to 10 Mbit/s on Virgin’s network, the TiVo simply wasn’t getting enough throughput from the CDNs because latency and packet loss were too high.
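
For anyone curious about the arithmetic, the classic back-of-the-envelope model is the Mathis et al. TCP throughput bound. The figures below are purely illustrative (they are not the actual CDN numbers from the investigation), but they show how quickly latency and packet loss eat into a nominal 10 Mbit/s connection:

```typescript
// Mathis et al. model: throughput ≈ (MSS / RTT) * (C / sqrt(lossRate)), C ≈ 1.22
function mathisThroughputMbps(mssBytes: number, rttMs: number, lossRate: number): number {
  const C = 1.22;
  const bytesPerSecond = (mssBytes / (rttMs / 1000)) * (C / Math.sqrt(lossRate));
  return (bytesPerSecond * 8) / 1_000_000;
}

// A 1460-byte MSS with 150 ms of latency and 1% packet loss caps a single TCP
// connection at roughly 1 Mbit/s, nowhere near enough for smooth HD video,
// regardless of how fast the last-mile connection is.
console.log(mathisThroughputMbps(1460, 150, 0.01).toFixed(2), "Mbit/s"); // ≈ 0.95
```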

As one might imagine on a project with stakeholders this large, bringing about a swift configuration change requires being armed with significant evidence: vital settings will not be changed on the strength of gut feeling alone! Thanks to Connection Emulator I was able to provide that evidence in abundance, which led to the necessary changes being made to deliver the content within the required parameters, and the rest of the tournament passed without any further problems.

One “gotcha” I discovered while using the tool however was that VirtualBox didn’t seem to like it one bit, blue-screening the host machine a few times before I was able to disable it. Switching over to VMware resolved the issue which would suggest that this is a problem with VirtualBox rather than with Connection Emulator itself, but I thought it worth mentioning anyway in case anyone else experiences the same issue.

VirtualBox issue aside, this is a great tool that performs its specialist tasks very well and very reliably, and it comes highly recommended to Windows-based developers who need to test their applications under varying network conditions.

BBC Sport app launches on connected TVs and Virgin TiVo

Just over four months have passed now since I started contracting at the BBC – four months that have flown by like days!

Today the primary purpose of my contract was released into the wild: the BBC Sports F1 application for Virgin TiVo. The public release of this application means that I am finally allowed to talk about it which is a great relief because not being able to talk about such an exciting project was really quite difficult!

The application enables users to watch all the BBC’s interactive coverage for major sporting events such as Formula One, Wimbledon, Euro 2012 and London 2012 with live streams, on-demand video and other additional content.

More information can be found on the BBC’s internet blog.

Finally, if a picture is worth a thousand words then a video must be worth a thousand pictures. With that in mind, here’s the official introduction video!

Interview with Ezra Dreisbach of Lobotomy Software

Back in the mid-late ’90s, Sega’s 32-bit Saturn was in the process of losing ground to Sony’s PlayStation, mostly due to a series of stupid decisions from Sega themselves. From hurriedly throwing together a machine that was incredibly difficult to program to asking for £400 for it on release (equivalent to between £600 and £707 today), Sega seemed pretty determined to make the Saturn an unattractive proposition for both developers and consumers alike.

As a result of being both difficult to program and having a much smaller user base, the Saturn was often the recipient of low-quality, rushed games that looked (and sometimes played) terribly compared to their PlayStation equivalents. Yes, there were obviously a number of greats on the Saturn that, in my opinion, eclipsed much of what the PlayStation had to offer, but I’m not talking about exceptional cases here – I’m talking about the way things were in general.

Sometimes this was down to developers porting over PlayStation code with minimal effort which meant no optimisation (for instance, Acclaim’s Alien Trilogy only used one of the Saturn’s two 32-bit CPUs) and sometimes it was simply down to developers not being skilled enough to get the most out of the hardware.

Sega’s non-existent software libraries meant that writing code in Assembly would yield speed increases of 300-500% over code written in C, but few publishers and developers were willing to spend the time – or the money – to do this. Yu Suzuki himself estimated that only 1% of the industry’s programmers would be skilled enough to get the most out of the Saturn, which compared to the high percentage of developers who could easily get things done on the PlayStation thanks to Sony’s comprehensive C libraries, just wasn’t good enough.

However, while most developers and publishers were happy to release sub-standard crap on the Saturn, there were a few who were willing – or maybe more importantly able – to achieve impressive results on the machine, sometimes even achieving things that were not possible on the PlayStation.

One such company was Lobotomy Software. I remember reading a preview of their first Saturn title, Exhumed, and being overjoyed that finally someone was putting some proper effort into a Saturn FPS. Deadalus, Doom and Alien Trilogy before it had all been horrendous, so I was really looking forward to playing what looked like the console’s first proper FPS game.

I pre-ordered the game and remember being late for school the day it arrived as I had been unable to wait ’til later to try it out. Over the next few months I completed the game several times over and unlocked every single secret, earning the in-game ability to levitate and even fly. The game was just awesome.

On the success of this game (at least technically if not commercially – this was the Saturn after all), Lobotomy Software was commissioned by Sega to also convert Quake and Duke Nukem 3D to the Saturn; both of which also turned out to be favourites of mine.

The brains behind the SlaveDriver engine that powered all three games and put every other Saturn FPS to shame was a guy called Ezra Dreisbach, and although I’d never met him or even seen his photo, I had some serious respect both for him and what he had achieved where so many others had failed.

Here’s Digital Foundry’s retrospective:

I’ve read five interviews with Ezra that relate to that period: a cheesy, biased, pro-Sega one conducted by SSM (and now available on UK:Resistance), a somewhat brief one by GameFan Magazine, a much more recent one on Eurogamer, and one that I found on www.curmudgeongamer.com a few years ago that is sadly no longer online. Luckily I had made a backup of that interview before it was taken offline, so I’m able to post it here for posterity as a thanks for all the hard work that was put into making those three games the best console FPSs of that generation. Enjoy!

Contributed by: jvm

Back when the Saturn had reached its apex in the US market, I had just obtained a used one and several games and had done some research on USENET for which games I should investigate. Among the games that seemed to be highly acclaimed were three by the company Lobotomy Software. Those titles were all first-person shooters: Quake, Powerslave, and Duke Nukem 3D. The last of these even had the functionality to play over the Sega Netlink modem network device. I bought all three and enjoyed them immensely. As it turns out, I was able to track down Ezra Dreisbach, the lead programmer on Powerslave and actually got to ask questions. Ezra now works at Snowblind Studios where he worked on Baldur’s Gate: Dark Alliance. Here’s the result of that communication.

Matt: You were the lead programmer on Powerslave for the Saturn by Lobotomy, but also on the team for the Saturn ports of Duke Nukem 3D and Quake. Did those all use the same engine?

Ezra: Yeah, they were all based on the Powerslave Saturn engine. It was on the strength of that engine that we were able to get the contract for Duke Nukem and Quake from Sega.

Matt: What were your contributions to that engine? What were your roles on the other games that used it?

Ezra: I was the only programmer on Saturn Powerslave, but after we got the Sega contracts our whole company started working exclusively on those two projects and I moved into more just doing the core game engine work to support them.

Matt: Powerslave and Duke Nukem 3D on the PC both used Ken Silverman’s BUILD engine. Was the engine you designed for the Saturn a port of the BUILD engine?

Ezra: Both games were pretty much rebuilt from the ground up. There is no shared code at all.

Those games work very differently from the way that things need to work on the Saturn, so there is really no way to do a port other than to basically remake the game. Doing ports isn’t the most financially or personally rewarding work. So there is no way that we would have wanted to do these if we hadn’t already known how to make Saturn first person shooters.

Matt: What, besides data like textures and models, was carried over from the PC versions? How about porting Quake?

Ezra: For Quake, all the levels were rebuilt by hand using our in house tool “Brew”. For Duke, we had a way to import the level data into Brew, but it still required substantial reworking.

Matt: What kind of system did Brew run on? I presume a PC, but then I’m not aware that I’ve ever heard a Saturn dev kit described before.

Ezra: It ran in Windows. The original idea was that it would be a tool that Lobotomy could use to create first person shooter levels for many games. We used it for Powerslave (Saturn & PlayStation), Mortificator (PC, unreleased) and the Quake and Duke ports.

Matt: You were a member of the “Design Team” for the PlayStation version of Powerslave. Does that mean you were a programmer, or did you fill some other role?

Ezra: No, it doesn’t mean programmer. On a project with so few people, everyone who works on it does some of the design. For instance, I designed some of the boss behaviour.

Matt: How did you feel about the two platforms, Saturn and PlayStation?

Ezra: I did do some work on the PlayStation later. After Saturn Quake was done I did a quick port of it to the PlayStation. Lobotomy was really hurting for cash at that point, and I hoped that we could get some publisher to sign us up to do PlayStation Quake. But for some reason, we couldn’t get anyone to go for it. Lobotomy folded soon after.

Matt: A PlayStation port of Quake? That’s terribly interesting! I’ve wanted a version of Quake on the PlayStation so I could compare versions on all three of the consoles from that “generation”. If you’ve the inclination, I’d truly like to hear how the port turned out on the PlayStation hardware, compared to the Saturn and (if you’ve seen it) the N64 version.

Ezra: The most striking thing about the PlayStation port was how much faster the graphics hardware was than the Saturn. The initial scene after you just start the game is pretty complex. I think it ran 20 fps on the Saturn version. On the PlayStation it ran 30, but the actual rendering part could have been going 60 if the CPU calculations weren’t holding it up. I don’t know if it would have ever been possible to get it to really run 60, but at least there was the potential.

Other than that, it would have looked identical to the Saturn version. Except for some reason the PlayStation video output has better colour than the Saturn’s.

So I know something about the PlayStation. And really, if you couldn’t tell from the games, the PlayStation is way better than the Saturn. It’s way simpler and way faster. There are a lot of things about the Saturn that are totally dumb. Chief among these is that you can’t draw triangles, only quadrilaterals.

Matt: I think I’ve seen an example of this in Tomb Raider on the Saturn. Very early on, in the caves, you can find a rock with a triangular side. In the PlayStation version, a rectangular texture was cut down the diagonal and mapped onto that triangle. In the Saturn version they had mapped the entire rectangular texture into the triangle, reducing one side to a point (in the sense that a triangle is a degenerate quadrilateral with one side of length zero).

Ezra: Ha! That’s pretty weak. What you do if you’re really trying is you pre-undistort the texture so that when you pinch one side down like that you end up getting what you wanted. We had to do this for the monster models in Saturn Quake.

Matt: Do you recall some of the internal differences between the Saturn and PlayStation versions of Powerslave?

Ezra: If you find all the team dolls in the Saturn version, then you get to play Death Tank. I’m not sure what you get in the PlayStation version. Jeff [Blazier] (the programmer of the PlayStation version) was working on a DT-like multiplayer minigame based on asteroids, but I don’t think he put it in the final game.

There are laser wall shooters in the Saturn version, but not in the PlayStation. It was a long time ago. There are plenty of differences, but I don’t remember any more major ones.

You can play a more advanced version of Death Tank if you’ve got Saturn Quake and Saturn Duke. Just boot up Quake so that it makes its save game, then start up Duke and a Death Tank option appears in the main menu.

Matt: Who designed the four exclusive levels for Saturn Quake? And while we’re talking Quake levels, what happened to one of the most memorable secret levels in the original Quake, Ziggurat Vertigo? Was it just too much wide open space for the engine to handle? Or were there other reasons for leaving it out?

Ezra: Yeah, exactly. That level was way too open to run well on the Saturn. One of the main problems with both the Quake and the Duke ports was that, on the Saturn, you can’t just draw a huge flat wall as one huge flat polygon. For one thing there’s no perspective correction, and some other limitations prevent you from even trying to work around that problem by dynamically subdividing the walls. So a flat wall has to be drawn as a mesh of quads. This means that huge walls have to be a lot of polygons, so huge open areas just can’t work. One of the Duke Nukem secret levels had to be replaced for the same reason.

The exclusive secret levels were designed by the whole Quake team. They were actually built by the Quake Saturn level designer, Paul Knutzen, who I’m happy to again be working with on Snowblind’s new project.

Matt: One of my blogs gives a quick amateur comparison of the Saturn and N64 versions of Quake. Any comments?

Ezra: I like this part:

“The next part is even more disappointing for the N64 port. Many of you may recall the three switches that light up as you descend a spiral ramp down to a pool of sludge. In the N64 version, the lighting is almost completely static in this section. Apparently adding coloured lighting to sections of the game is easy, but the addition of dramatic dynamic lighting is too hard to do. But wait… Lobotomy managed to pull it off on the Saturn. Crazy.”

I remember being really grumpy about implementing the dynamic-world lights like the three switches in this area. I’m glad someone appreciated it.

Matt: Do you generally like first person shooters? Or was the work on Saturn shooters a business decision, given the popularity of the genre?

Ezra: Yeah, I like first person shooters, Halo was my favourite game last year. But at that point, what I wanted to do didn’t really have anything to do with what Lobotomy decided to do. I was hired to work on Saturn Powerslave, so the decision to do that game was made way before I got there. And even after that I didn’t get much say in what we were going to work on. Not that we had much choice, people weren’t exactly lining up around the block to offer us work.

Matt: What other kinds of games do you play in your spare time?

Ezra: I’ve already played a ton of games, so I like games that are not ordinary. In the past year, I liked Halo, Rez, Ico and Jet Set Radio Future.

Matt: Porting a game to a platform is said to be far less rewarding than creating a new game, tailored for a specific platform. If you could return to the days of Lobotomy, with the experience you have now, would you have done anything differently?

Ezra: As an independent game developer there’s always a big difference between what you want to do, and what a publisher is willing to fund you for. So usually you end up doing stuff that’s lamer than you’d like. Nothing you can do about it really.

Matt: Any plans for a Death Tank Drei hidden in any of your games?

Ezra: No. I would like to make a stand-alone DT game someday though.

Matt: Thanks for taking the time to share your answers with me. And, as I’ve said before, thanks for the work on Powerslave, Quake, and Duke Nukem 3D… I know I enjoyed playing all three of them.

Cyclomatic complexity

Cyclomatic complexity is a software metric used to measure the complexity of code. Specifically, it directly measures the number of linearly independent paths through a method. Although it is not a hard rule, code quality generally correlates inversely with the cyclomatic complexity value: the lower the score, the higher the quality of the code.
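
As a quick example, a method like the following TypeScript sketch would score roughly 4 or 5: the base path, plus one each for the for, the if and the else-if, and possibly one more for the && depending on whether the tool counts boolean operators separately.

```typescript
// Decision points: the for loop, the if, the && and the else-if.
function totalScore(scores: number[], bonusApplied: boolean): number {
  let total = 0;
  for (const score of scores) {          // +1
    if (score > 100 && bonusApplied) {   // +1 for the if (+1 for the &&)
      total += score * 2;
    } else if (score > 0) {              // +1
      total += score;
    }
  }
  return total;
}
```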

One of the best programs that I’ve found for quantifying this metric is SourceMonitor. This handy little freeware tool will give you the cyclomatic complexity of your classes as well as of the individual methods within those classes, so you can see exactly what requires attention in the event of a high score. It also provides other feedback, such as the number of lines of code and the number of methods per class.

Why should you care what cyclomatic complexity value your code gets? Well, apart from the warm glow you get inside from knowing that your code is well-written and elegant, there is a practical reason for it as well and that is the reduction of potential bugs. The more complex the code, the more difficult it is to keep track of what’s going on and the higher the risk of bugs creeping into your classes.

The accepted values for cyclomatic complexity are as follows:

  • 1 – 10: a simple program with a very low risk of bugs
  • 11 – 20: a more complex program with a moderate risk of bugs
  • 21 – 50: a very complex program with a high risk of bugs
  • > 50: untestable and obviously a number you want to stay well away from.