Friday, October 19, 2007

An Image Compositor Technique for a Planet


I had a need to develop a capability within Titan Class Vision to composite many images representing parts of our planet at different resolutions. In particular, these images could be quite large; perhaps 200MB each.

I decided to give memory-mapped files a go using the low-level paging mechanisms available to modern operating systems. In a nutshell we’re looking at the use of mmap and munmap. Memory mapping files is very fast and highly optimised; it is by far the fastest way of reading bytes into memory from disk.
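
Here is a minimal sketch of the approach (the RAII wrapper and its names are my own; the real class in Titan Class Vision is the MemoryMappedBMPFileLayer declared further down):

#include <sys/mman.h>   // mmap, munmap
#include <sys/stat.h>   // fstat
#include <fcntl.h>      // open
#include <unistd.h>     // close
#include <stdexcept>
#include <string>

// Maps a whole file read-only; the kernel pages bytes in on demand.
class MappedFile {
public:
    explicit MappedFile(const std::string& inPath)
    {
        int fd = open(inPath.c_str(), O_RDONLY);
        if (fd < 0) throw std::runtime_error("open failed");
        struct stat st;
        if (fstat(fd, &st) != 0) {
            close(fd);
            throw std::runtime_error("fstat failed");
        }
        mSize = st.st_size;
        mBytes = mmap(0, mSize, PROT_READ, MAP_FILE | MAP_PRIVATE, fd, 0);
        close(fd); // the mapping keeps its own reference to the file
        if (mBytes == MAP_FAILED) throw std::runtime_error("mmap failed");
    }
    ~MappedFile() { munmap(mBytes, mSize); }

    const void* GetBytes() const { return mBytes; }
    size_t GetSize() const { return mSize; }

private:
    void*  mBytes;
    size_t mSize;
};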

My main image of the planet is about 200MB in size. I keep a Windows BMP file saved in BGRAUnsignedInt8888Rev form (i.e. BGRA pixels in the UNSIGNED_INT_8_8_8_8_REV layout, the optimal format for something to be textured on Mac OS X) and then use memory-mapped file IO to page the BMP into memory. I then have an image compositor object that is able to composite many of these images, e.g. I have a BMP for the entire planet, and one for just a given state of Australia (NSW). When client code makes a request, the compositor assembles a composited buffer of my layers as required and in the resolution required. This composited buffer is then sent to a texture using GL_STORAGE_SHARED_APPLE as an optimisation. I also keep some of these textures around for situations where I know in advance that I'm going to zoom in on something; rendering is then very fast as no dynamic composition is required.
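
The upload itself uses Apple’s client-storage and texture-range extensions; a hedged sketch (the function and variable names are mine, error handling omitted):

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

// 'pixels' is the composited BGRA buffer; 'widthPx'/'heightPx' its size.
GLuint CreateSharedTexture(void* pixels, GLsizei widthPx, GLsizei heightPx)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tex);

    // Ask the driver to use the app's buffer directly (no extra copy)...
    glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT,
                        widthPx * heightPx * 4, pixels);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT,
                    GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, 1);

    // ...and upload in the Mac-optimal pixel format mentioned above.
    glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA,
                 widthPx, heightPx, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    return tex;
}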

Oh, and I'm using the Mac OS X Accelerate Framework for high quality and high performance scaling given that Titan Class Vision targets this platform.
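
In vImage terms the scaling is essentially one call; here is a minimal sketch (my own wrapper function; despite the _ARGB8888 suffix, vImageScale_ARGB8888 treats pixels as four 8-bit channels and so handles BGRA just as well):

#include <Accelerate/Accelerate.h>

// Scales a BGRA buffer of inW x inH pixels into outW x outH.
void ScaleBGRA(void* inPixels,  unsigned inW,  unsigned inH,
               void* outPixels, unsigned outW, unsigned outH)
{
    vImage_Buffer src  = { inPixels,  inH,  inW,  inW  * 4 };
    vImage_Buffer dest = { outPixels, outH, outW, outW * 4 };
    // kvImageHighQualityResampling trades speed for the high quality
    // scaling mentioned above; vImage spreads the work across CPUs.
    vImageScale_ARGB8888(&src, &dest, NULL, kvImageHighQualityResampling);
}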

It all works pretty well and is fast, even on a tired old G4 PowerBook. Multiple processors are utilised thanks to the Accelerate Framework. One naturally has to be considerate of virtual memory usage with the compositor, but that is a resource management exercise.

Here’s the class structure that I came up with; feel free to use it in your own work but please be kind and include a reference to this page and some credits.


#include <list>
#include <string>

#include <boost/shared_ptr.hpp>

namespace WorldLayerCompositor {

    // A BGRA pixel in UNSIGNED_INT_8_8_8_8_REV layout.
    typedef unsigned long BGRA;

    // BMP files are little-endian; these swap values to host byte order.
    inline unsigned short SwapInt16LittleToHost(
        unsigned short arg
    ) throw();

    inline unsigned long SwapInt32LittleToHost(
        unsigned long arg
    ) throw();

    // An image covering some region of the planet at some resolution.
    class Layer {
    public:
        virtual ~Layer();

        inline void GetCentreLatLong(
            double& outLat, double& outLong
        ) const throw();

        inline double GetResolution() const throw();

        inline void GetSizePx(
            unsigned& outWidthPx,
            unsigned& outHeightPx
        ) const throw();

        virtual BGRA* GetBuffer() const throw() = 0;
    };

    // A layer whose pixels are paged in from a BMP file via mmap.
    class MemoryMappedBMPFileLayer : public Layer {
    public:
        MemoryMappedBMPFileLayer(
            const std::string& inFile
        ) throw();
        virtual ~MemoryMappedBMPFileLayer();

        virtual BGRA* GetBuffer() const throw();
    };

    // Composites an ordered list of layers into a requested sub-region.
    class LayerCompositor {
    public:
        LayerCompositor() throw();

        typedef std::list<boost::shared_ptr<Layer> >
            LayerList;

        LayerList::iterator AddLayer(
            boost::shared_ptr<Layer> inLayerP
        ) throw();

        void RemoveLayer(
            const LayerList::iterator& inLayerIter
        ) throw();

        bool GetNextSubBuffer(
            unsigned long** inBGRAUnsignedInt8888RevPP,
            unsigned& outSubXPx,
            unsigned& outSubYPx,
            unsigned& outSubWidthPx,
            unsigned& outSubHeightPx
        ) const throw();

        void GetEffectiveSizePx(
            unsigned& outWidthPx,
            unsigned& outHeightPx
        ) const throw();

        inline void SetSubRegion(
            double inLat, double inLong,
            double inResolution,
            unsigned inWidthPx, unsigned inHeightPx
        ) throw();
    };
} // namespace WorldLayerCompositor
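
To make the intended use concrete, here is how client code might drive these classes (a sketch only; the file names, coordinates and my reading of GetNextSubBuffer's tiling semantics are assumptions):

using namespace WorldLayerCompositor;

LayerCompositor compositor;
compositor.AddLayer(boost::shared_ptr<Layer>(
    new MemoryMappedBMPFileLayer("planet.bmp")));
compositor.AddLayer(boost::shared_ptr<Layer>(
    new MemoryMappedBMPFileLayer("nsw.bmp")));

// Ask for a 1024x1024px view centred on Sydney at a given resolution.
compositor.SetSubRegion(-33.87, 151.21, 0.001, 1024, 1024);

// Drain the composited result one sub-buffer at a time.
unsigned long* subBufferP = 0;
unsigned x, y, w, h;
while (compositor.GetNextSubBuffer(&subBufferP, x, y, w, h)) {
    // e.g. upload the w x h sub-buffer at (x, y) to the texture.
}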


I intend to evolve this class much further and make it aware of planet-related concerns. For example, if a request is made for a region that extends over the dateline then at present I move the region back either east or west. In the future I’ll handle this so that the compositing considers the dateline.
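
The interim dateline behaviour amounts to a simple longitude clamp, something like this (a sketch of my description above, not the shipping code):

// Shifts a requested region east or west so that it does not
// straddle the +/-180 degree meridian.
void ClampToDateline(double& ioCentreLong, double halfWidthDeg)
{
    if (ioCentreLong + halfWidthDeg > 180.0)
        ioCentreLong = 180.0 - halfWidthDeg;   // pull back west
    else if (ioCentreLong - halfWidthDeg < -180.0)
        ioCentreLong = -180.0 + halfWidthDeg;  // pull back east
}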

Another thought is to be able to add layers described in vector terms using SVG and GML... I think that there are some interesting possibilities.

Monday, July 16, 2007

Using The GML’s Moving Object Status for presenting Flight Information



This is something that I’ve wanted to share for some time now: how to use the Geography Mark-up Language (GML) for tracking things that move in the real world. GML can be a lot to get one’s head around so I hope that this real-world implementation will help you. We have been using GML for some time now with our Titan Class Vision product.

I want to show you how GML can be used to describe flight data; the sort of data you’ll see presented in the arrival and departure halls of airports. My hope is that the world can share a view on how airport flight information should be provided to many kinds of application.

GML provides a set of abstractions that are intended to be specialised by applications for their own purpose. Consequently we have defined a schema to describe journeys. This schema is represented by the class diagram above and is available in XSD form from our website.

Journeys describe a track between point A (the departure port) and point B (the arrival port). They have a scheduled departure and a scheduled arrival time. A specific type of journey is a flight, which is distinguished by nothing more than its type.

A Journey is a specialisation of a GML DynamicFeature. Dynamic features are things that have a relationship to geography (as per a regular Feature) but can also change over time and record when they do so. Consequently snapshots of a dynamic feature can be expressed given the availability of a validTime element. Additionally, and importantly, dynamic features can also have a “track” element. Tracks define a set of objects that describe a feature’s status over a period of time. These objects are known as GML’s MovingObjectStatus and minimally describe the time that the status applies to (validTime) and the location of the dynamic feature. A location can also be “null” i.e. unknown; this is pertinent to us and I’ll come back to it.

The other elements of MovingObjectStatus are also very useful when tracking the position of a dynamic feature. For example, given acceleration, speed, bearing (horizontal and vertical), elevation etc. one can reasonably predict where the feature will be next. For most flight information displays in airports though, these elements are less important (Titan Class Vision at the airport is attempting to change this situation!).

We have further specialised MovingObjectStatus as a JourneyStatus with the discriminating element being the Estimated Time of Arrival (ETA). Thus for a given validTime we can report what the ETA was.

Enough words! Let’s have an example based on our journey schema:


<j:FeatureCollection ...>
<name>Journeys</name>
<boundedBy>...</boundedBy>

<featureMembers>
<j:Flight>
<name>QF32</name>
<validTime>
<TimeInstant>
<timePosition>
2006-05-01T10:30:00Z
</timePosition>
</TimeInstant>
</validTime>
<track>
<j:JourneyStatus>
<validTime><TimeInstant>
<timePosition>
2006-05-01T10:30:00Z
</timePosition>
</TimeInstant></validTime>
<location><Null/></location>
<j:ETA>
2006-05-02T00:05:00Z
</j:ETA>
</j:JourneyStatus>
</track>
<j:departPort xlink:href="..."/>
<j:departTime>
2006-05-01T11:15:00Z
</j:departTime>
<j:arrivePort xlink:href="..."/>
<j:arriveTime>
2006-05-02T00:00:00Z
</j:arriveTime>
</j:Flight>

</featureMembers>

<validTime><TimeInstant><timePosition>
2006-05-01T10:30:00Z
</timePosition></TimeInstant></validTime>

</j:FeatureCollection>


Please note that the above is not valid XML i.e. I’ve removed the namespace declarations and the xlink:href contents so that I could fit everything on the page and just present what is relevant to this discussion. A valid version of the above can be found on our website.

The above document states, “here are the flights as at 1030 GMT. There is just one flight, named QF32, which is scheduled to arrive at midnight GMT. However the current ETA is 0005 GMT.” In the real world there would be a number of Flight elements for a given airport, and multiple JourneyStatus elements if there have been several ETA revisions. The location is not important for most flight information displays at airports and hence we show it as having a null value.
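
If you want to consume such a document programmatically, extracting the latest ETA is a short XPath exercise. Here is a sketch using libxml2; the j namespace URI is a placeholder for whatever your schema declares, and I assume the unprefixed elements (name, track etc.) live in the standard GML namespace:

#include <stdio.h>
#include <libxml/parser.h>
#include <libxml/xpath.h>

int main(void)
{
    xmlDocPtr doc = xmlReadFile("journeys.xml", NULL, 0);
    xmlXPathContextPtr ctx = xmlXPathNewContext(doc);

    xmlXPathRegisterNs(ctx, BAD_CAST "gml",
                       BAD_CAST "http://www.opengis.net/gml");
    xmlXPathRegisterNs(ctx, BAD_CAST "j",
                       BAD_CAST "http://example.com/journey"); // placeholder

    // The last JourneyStatus in a track carries the latest ETA.
    xmlXPathObjectPtr res = xmlXPathEvalExpression(
        BAD_CAST "//j:Flight/gml:track/j:JourneyStatus[last()]/j:ETA",
        ctx);

    if (res && res->nodesetval && res->nodesetval->nodeNr > 0) {
        xmlChar* eta = xmlNodeGetContent(res->nodesetval->nodeTab[0]);
        printf("latest ETA: %s\n", eta);
        xmlFree(eta);
    }

    xmlXPathFreeObject(res);
    xmlXPathFreeContext(ctx);
    xmlFreeDoc(doc);
    return 0;
}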

ETA could have been included as an element of Journey. I chose to have it as an element of JourneyStatus (aka MovingObjectStatus) so that a history of ETA reports can be kept. Additionally the reporting of an ETA could be further analysed by location.

Plans are modelled as other features (including other Journey features) that can further describe a given Journey feature. For example we have journeys that describe the frequently used flight paths between various destinations. These paths can include various way-points and record typical elevation and speed at those points.

Our schema can also be used to describe a schedule, such as a flight schedule. The validTime element of a Journey feature can be used to distinguish different schedules depending on the day of the week. For example a scheduled flight might only occur on Mondays and Fridays.

I’m hoping that it is becoming apparent that more than just the ETA of a flight can be described. Firstly, the structure of our schema permits the actual positions of flights at given time instants. We have embarked on consuming air radar data using our journey schema. This will enable us to render the real-time track of aircraft with our flight information display.

Secondly, anything that makes a journey can be described. This includes trains, ships, cars and trucks; even people swiping their security cards.

In a future blog I shall describe various forms of accessing journey data using the Web Feature Service (WFS) and also how this data can be published and subscribed to using an enterprise messaging bus such as IBM’s WebSphere MQ and others that use the Java Message Service (JMS).

Meanwhile if you find that our journey schema is useful to you then please add a comment or do get in touch.

Thursday, April 26, 2007

Titan Class Vision & Quartz Composer



I’ve just about finished integrating Quartz Composer with Titan Class Vision. For those of you who know nothing of Titan Class Vision, it provides a global flight information display that tracks aircraft in real-time. Our "Google Earth" type display presents new revenue generation opportunities for airports and also serves as a good airport-customer initiative. Titan Class Vision has been installed on two displays near the A/B exit of the Terminal 1 Arrivals Hall in Sydney International Airport (T1), Australia.

Before I describe my initial requirements and how I’ve integrated Quartz Composer, here is a movie showing the animated integration.

I needed to consider using Titan Class Vision with Plasma Display Panels and thus minimise burn-in issues. Titan Class Vision is already quite animated, zooming between various resolutions. However more movement was required for the otherwise static text and the header and footer margins.

I could have programmed these animations into Titan Class Vision directly but I’ve always felt that something like Quartz Composer should be composing Vision’s image with other digital media and effects. By externalising most of the content (other than the planet and the 3d objects that get overlaid on to the planet e.g. the flight paths), I can now customise the display contents quite easily for individual customers - and with great effect!

Before I continue, thanks to everyone on the Quartz Composer forum who has helped me over the past few weeks. In particular thanks go to “tkoelling”, Alessandro Sabatelli and Pierre-Olivier Latour.

The approach that I’ve taken is to render my 3D world into a Core Video (CV) buffer. The viewport is the size of the screen and is rendered once per frame. I then pass this CV buffer to Quartz Composer (QC). QC and CV share their OpenGL resources (context). Thus the image that I have rendered remains on the GPU side of the fence (I think that CV creates my image as a texture). QC receives the CV buffer as an image parameter and my scripts can do what they need to do. Pierre describes how to do QC/CV integration on the forum and I also posted some code there.

Titan Class Vision makes no assumption with regards to the name and location of the QC file. Instead I use AppleScript to pass in the path of the composition and this causes the instantiation of a new QCRenderer object (releasing any previously held one, of course). At this time I also reset the timing for my composition if it is a different file to the last one. In summary I have some external program (which I actually call “Bootstrap”) that tells Titan Class Vision what to do - now including telling it where my composition file is. Bootstrap is an AppleScript Studio application that bundles the composition file as a resource.

Bootstrap tells Titan Class Vision to zoom through different resolutions on a periodic basis - typically every 30 seconds. It now also sets up input parameters to the QCRenderer so that parts of my composition are enabled/disabled. I typically have one composition file per client’s implementation and enable sub-patches as necessary depending on what I want to show. This approach eradicates any performance impact of instantiating a new QCRenderer object (not that there appeared to be much of an impact though). Note that I do release/alloc a new QCRenderer if my context is re-shaped due to a bug with QC taking note of its viewport only on instantiation.

For your interest, Bootstrap passes input parameters to my QCRenderer via AppleScript as a URL-encoded string parameter e.g.

set composition parameters to "Fade_In=1&Fade_Out=0&World_Actual_Size=1&World_Zoomed=0"
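
On the Titan Class Vision side those parameters just need splitting on '&' and '=' before each pair is handed to the QCRenderer via setValue:forInputKey:. A sketch (my own helper, not the shipping code; decoding of URL-escaped characters is omitted):

#include <map>
#include <string>

// Splits "Fade_In=1&Fade_Out=0" into {"Fade_In": "1", "Fade_Out": "0"}.
std::map<std::string, std::string> ParseParams(const std::string& inQuery)
{
    std::map<std::string, std::string> params;
    std::string::size_type pos = 0;
    while (pos < inQuery.size()) {
        std::string::size_type amp = inQuery.find('&', pos);
        if (amp == std::string::npos) amp = inQuery.size();
        std::string::size_type eq = inQuery.find('=', pos);
        if (eq != std::string::npos && eq < amp) {
            params[inQuery.substr(pos, eq - pos)] =
                inQuery.substr(eq + 1, amp - eq - 1);
        }
        pos = amp + 1;
    }
    return params;
}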


That’s about it from a high level perspective, other than to say that I can easily get 60fps on my relatively slow development machine (a dual 1GHz G4). I haven’t released my changes to my Quad Xeon yet but will do so in the next couple of months and all of this should fly (pun intended)!

Please note that when rendering to a CVOpenGLBuffer it must be treated as immutable, i.e. after you have written to it and passed it to the QCRenderer, do not try writing to it again. Re-using the buffer like this is in fact exactly what I was doing on Tiger and all was well. However with Leopard I had a problem given that the QC imaging pipeline had apparently changed.

The resolution is to use a CVOpenGLBufferPool and then use CVOpenGLBufferPoolCreateOpenGLBuffer each time you want to render a new frame. After the frame has been passed to the QCRenderer you then release the CVOpenGLBuffer. Problem solved.
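
Put together, the per-frame lifecycle looks something like this (a sketch against the CoreVideo C API; the pool attributes, the cglContext variable and the point at which the buffer is handed to the QCRenderer are assumptions based on my setup described above):

#include <CoreVideo/CoreVideo.h>
#include <OpenGL/OpenGL.h>

// One-off: a pool that vends screen-sized OpenGL buffers.
CVOpenGLBufferPoolRef CreatePool(int widthPx, int heightPx)
{
    CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 2,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFNumberRef w = CFNumberCreate(NULL, kCFNumberIntType, &widthPx);
    CFNumberRef h = CFNumberCreate(NULL, kCFNumberIntType, &heightPx);
    CFDictionarySetValue(attrs, kCVOpenGLBufferWidth, w);
    CFDictionarySetValue(attrs, kCVOpenGLBufferHeight, h);
    CFRelease(w);
    CFRelease(h);

    CVOpenGLBufferPoolRef pool = NULL;
    CVOpenGLBufferPoolCreate(kCFAllocatorDefault, NULL, attrs, &pool);
    CFRelease(attrs);
    return pool;
}

// Per frame: take a fresh buffer, render into it, hand it over, release.
void RenderFrame(CVOpenGLBufferPoolRef pool, CGLContextObj cglContext)
{
    CVOpenGLBufferRef frame = NULL;
    CVOpenGLBufferPoolCreateOpenGLBuffer(kCFAllocatorDefault, pool, &frame);

    // Direct GL rendering at this buffer, then draw the 3D world.
    CVOpenGLBufferAttach(frame, cglContext, 0, 0, 0);
    // ... glClear(...), draw the planet, glFlush() ...

    // Pass 'frame' to the QCRenderer as its image input here, then
    // release it; the next frame comes from the pool, not this buffer.
    CVOpenGLBufferRelease(frame);
}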