Monday, 15 July 2013

Rig Status Update

I've been continuing my work on the Rig project I introduced back in September, as well as helping to add Wayland support to GNOME Shell. I've been feeling bad that I haven't made time to post about the progress of either project, so I wanted to give a quick status update for Rig...

I think the easiest way to get a quick idea of how Rig has been shaping up is this overview video I made back in May that goes over the initial UI layout and features:
video

The main thing that's been changing UI-wise since I made that video is that the bottom area is evolving beyond just timeline management into an area for "controllers" that must handle more than simple key-frame based animations.

Controllers will support several methods of controlling properties: key-frame animation would be one, but other methods would include physics simulation, expressions that relate properties together, and all manner of other high-level behaviours. As an example of such a behaviour, Chris Cummins, one of the interns I've been working with, is experimenting with a Boids-based flocking simulation, which might offer us a fun way to introduce emergent, nature-inspired aesthetics to the backdrop of a device.
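
To make the idea more concrete, here's a minimal sketch of what a controller abstraction could look like, with a key-frame method and an expression method driving plain float properties through one interface. The types and names here are hypothetical illustrations, not the actual Rig API:

    /* Two controller methods (key frames and expressions) driving plain
     * float properties through a common update hook. Illustrative only;
     * not the real Rig API. */
    #include <stdio.h>

    typedef struct Controller Controller;
    struct Controller {
      void (*update) (Controller *self, float t); /* t = time in seconds */
      float *property;                            /* the property being driven */
    };

    /* Method 1: a key-frame controller interpolating between two key frames */
    typedef struct {
      Controller base;
      float t0, v0, t1, v1;
    } KeyFrameController;

    static void
    key_frame_update (Controller *self, float t)
    {
      KeyFrameController *kf = (KeyFrameController *) self;
      float f = (t - kf->t0) / (kf->t1 - kf->t0);
      if (f < 0) f = 0;
      if (f > 1) f = 1;
      *self->property = kf->v0 + f * (kf->v1 - kf->v0);
    }

    /* Method 2: an expression controller relating one property to another */
    typedef struct {
      Controller base;
      const float *source;
    } ExpressionController;

    static void
    expression_update (Controller *self, float t)
    {
      ExpressionController *e = (ExpressionController *) self;
      *self->property = *e->source * 2.0f; /* e.g. "width = height * 2" */
      (void) t;
    }

    int
    main (void)
    {
      float height = 0, width = 0;
      KeyFrameController kf = { { key_frame_update, &height }, 0, 0, 1, 100 };
      ExpressionController ex = { { expression_update, &width }, &height };

      for (float t = 0; t <= 1.0f; t += 0.25f)
        {
          kf.base.update (&kf.base, t);
          ex.base.update (&ex.base, t);
          printf ("t=%.2f height=%.1f width=%.1f\n", t, height, width);
        }
      return 0;
    }

The point is that the designer only ever sees properties; whether the next value is computed by a key frame, a physics step or an expression is a detail of the controller.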

Probably the biggest architectural change in Rig is that it's now designed to be connected directly to the device you are designing for, enabling immediate feedback about performance, responsiveness and quality on real target hardware. We've added a networking layer, using Avahi and Protocol Buffers, to discover devices and to synchronize UI changes made in the designer with the device.
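
For a flavour of the discovery side, here's a minimal sketch of browsing for devices with Avahi. The "_rig._tcp" service name is an assumption for illustration; it isn't necessarily the service type Rig actually registers:

    /* Browse the local network for devices advertising a Rig service over
     * mDNS/DNS-SD using Avahi. The "_rig._tcp" type is hypothetical. */
    #include <stdio.h>
    #include <avahi-client/client.h>
    #include <avahi-client/lookup.h>
    #include <avahi-common/simple-watch.h>

    static void
    client_cb (AvahiClient *client, AvahiClientState state, void *user_data)
    {
    }

    static void
    browse_cb (AvahiServiceBrowser *browser,
               AvahiIfIndex interface,
               AvahiProtocol protocol,
               AvahiBrowserEvent event,
               const char *name,
               const char *type,
               const char *domain,
               AvahiLookupResultFlags flags,
               void *user_data)
    {
      /* A device appeared; the next step would be to resolve its address
       * and open the synchronization connection */
      if (event == AVAHI_BROWSER_NEW)
        printf ("Found device '%s' in domain '%s'\n", name, domain);
    }

    int
    main (void)
    {
      AvahiSimplePoll *poll = avahi_simple_poll_new ();
      int error;
      AvahiClient *client =
        avahi_client_new (avahi_simple_poll_get (poll), 0,
                          client_cb, NULL, &error);

      if (!client)
        return 1;

      avahi_service_browser_new (client, AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC,
                                 "_rig._tcp", NULL, 0, browse_cb, NULL);

      avahi_simple_poll_loop (poll);
      return 0;
    }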

Rig aims to help optimize the workflow of UI development, and one of the biggest inefficiencies today is that interaction and visual designers often use tools that are completely unconstrained by the technologies and devices that will eventually be used to realize their ideas.

Going further, the plan is to incorporate remote profiling visualization capabilities directly into Rig, so that device metrics can influence the design stages as early as possible instead of only serving as a diagnostic tool. UIs need to work within the limits of the hardware they run on, otherwise the experience suffers. If we don't move to a situation where real metrics can influence the design early, we either have to continue being ultra-conservative with our UIs, or we risk big problems being discovered very late in the development process, problems that can force us back to the drawing board or leave us scrambling to fix the technology under pressure.

To keep this update relatively short here's a quick run-through of the work that's been going on:

  • UI design work
    • Thanks to Mikael Metthey for his help creating mock-ups for Rig, a marked improvement over the very first visuals we used:


  • Device connectivity - as mentioned above.
  • Neil Roberts has worked on basic OS X support.
  • Neil also added binary update support, since we'd like to aim for a browser-like development model of continuously rolling out small features; once Rig is installed it will automatically evolve, getting better over time.
  • Bump mapping support for 3D models, for detailed lighting effects over complex models.
  • A pointillism effect by Plamena Manolova as a fun example of a generative art algorithm that can be handled efficiently on the GPU.
  • Default cylindrical texture mapping for models that don't have their own texture coordinates (see the sketch just after this list).
  • Plamena is currently implementing an algorithm for tiling images across an arbitrary mesh.
  • Plamena has added initial support for real-time fur rendering, another visual style that clearly diverges from what's achievable with the traditional PostScript rendering model.
  • Chris has been working on a particle engine.
  • Chris has also been working on a Boids simulation engine to emulate flocking behaviours. The inspiration for this came from an exhibition by Universal Everything. (Full disclosure: I work for Intel, though the point here isn't the advertising, just the idea of bringing more natural forms into user interfaces.)
  • We've made some early progress experimenting with WebGL support.
    • The low level graphics layer of Rig now supports WebGL, but we still have some further integration work to do in the higher level engine code before we can really test the viability of this idea.
  • Drag & Drop and Copy & Paste - work in progress. Not having these is really getting in the way of lots of other important interaction and feature work that we need to move onto next.
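
As promised above, here's a sketch of the default cylindrical texture mapping idea: for a mesh with no texture coordinates, wrap the texture around the model's vertical axis. The helper below is a hypothetical illustration rather than the actual Rig code:

    /* Derive UV coordinates for a vertex by projecting it onto a cylinder
     * around the model's Y axis. y_min/y_max are the model's vertical
     * bounds. Hypothetical helper, not the actual Rig implementation. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vertex;
    typedef struct { float u, v; } TexCoord;

    static TexCoord
    cylindrical_map (Vertex vert, float y_min, float y_max)
    {
      const float pi = 3.14159265f;
      TexCoord tc;
      /* The angle around the Y axis maps to u in [0, 1] */
      tc.u = atan2f (vert.z, vert.x) / (2.0f * pi) + 0.5f;
      /* The height along the Y axis maps to v in [0, 1] */
      tc.v = (vert.y - y_min) / (y_max - y_min);
      return tc;
    }

    int
    main (void)
    {
      Vertex vert = { 1.0f, 0.5f, 0.0f };
      TexCoord tc = cylindrical_map (vert, 0.0f, 1.0f);
      printf ("u=%.2f v=%.2f\n", tc.u, tc.v);
      return 0;
    }
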
Next week I'm off to SIGGRAPH, where we'll be publishing a whitepaper explaining why we feel there is a big opportunity to really raise the bar for how we manage UI development, and we'll also have a short video introducing the Rig project.

I'll also be at GUADEC in August and I'd love to chat with anyone interested in Rig.

I'll try not to wait so long before posting my next update here.

Wednesday, 31 October 2012

Rig 1 - A UI designer & engine


For anyone interested in graphics and in the design of user interfaces (UIs), I'd like to invite you to take a look at our release of Rig 1 today. Rig 1 is the start of a new design tool & UI engine that I've been working on recently with Neil Roberts and Damien Lespiau as a means to break from the status quo of developing interfaces, enabling more creative visuals and taking better advantage of our modern system hardware.

In particular we are looking at designing UIs for consumer devices with an initial focus on native interfaces (device shells and applications), but also with an eye towards WebGL support in the future too.

With Rig we want to bring interactivity into the UI design process to improve creativity and explore the possibilities of GPUs beyond the traditional PDF drawing model we have become so accustomed to. We want to see a design workflow that isn't constrained by the static nature of tools such as Photoshop or misled by offline post-processing tools such as After Effects. We think designers and engineers should be able to collaborate through a common technology like Rig.

Let's start with a screenshot!


This gives you a snapshot of how the interface looks today. For Rig 1 our focus has been on bootstrapping an integrated UI design tool & rendering engine that can showcase a few graphics techniques not traditionally used in user interfaces and allow basic scene and animation authoring.

If you're wondering what mud, water, some trees and a sun have to do with creating fancy UIs: firstly, I should own up to drawing them, so sorry about that, but they're a sneak peek at the assets for a simple "Fox, Goose and Corn" puzzle, in the style of a pop-up book, that we are looking to create for Rig 2.

I'd like to highlight a few things about the interface:

It all starts with assets:

On the left of the UI you see a list of assets, with a text entry to search based on tags and names. Assets might be regular images, they could be 3D meshes, or they could be special ancillary images such as alpha masks or normal maps (used for a technique called bump mapping, to give a surface a perturbed look under dynamic lighting).

Assets don't have to be things created by artists though, they might also be things like scripts or data sources in later versions of Rig.

The basic idea is that assets are the building blocks for any interface, and so that's why we start here. Click an asset and it's added to the scene; assets can sometimes even be combined to make more complex things, which I'll talk more about later.

Direct manipulation:

In the center is the main editing area, where you see the bounds of the device currently being targeted and can directly manipulate scenes for the UI being designed. We think this ability to directly manipulate a design from within the UI engine itself is going to be a cornerstone for enabling greater creativity, since there is no ambiguity about what's possible when you're working directly with the technology that will run the UI once it's deployed to a device.

These are the current controls:
  • Middle mouse button to rotate the whole scene
  • Shift + middle mouse to pan the scene
  • '+' and '-' to zoom in and out
  • Left click and drag to move an object (object should not already be selected)
  • Left click and drag a selected object to rotate
  • Ctrl-Z and Ctrl-Y for Undo and Redo respectively
  • Ctrl-S to Save the current document



In-line previews:

Without leaving the designer it's possible to toggle on and off the clutter of the editing tools, such as the grid overlay, and also toggle fullscreen effects such as the depth-of-field effect shown here.

Currently this is done by pressing the 'P' key.

Editing properties:

Properties are another cornerstone for Rig, but first I should introduce what entities and components are.

A scene in Rig is composed of primitives called entities, which basically just represent a 3D transformation relative to a parent. What really makes entities interesting are components, which attach features to entities. Components can represent anything really, but current examples include geometry, material, camera and lighting state.

When you click an asset and it's added to the scene, what actually happens is that we create a new entity plus a new component that refers to the clicked asset, and attach the component to the entity. The kind of component created depends on the kind of asset you click. If you click an asset while an existing entity is selected, the new component is attached to that entity, which lets you build up a collection of components on a single entity.

Each component attached to an entity comes with a set of properties. The properties of the currently selected entity and those of all its attached components are displayed on the right-hand side of the interface. The effect of manipulating these properties can immediately be seen in the main editing area.
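
A minimal sketch of this entity/component model might look like the following. The types are hypothetical stand-ins for illustration, not the real Rig types:

    /* An entity is a transform relative to its parent plus a list of
     * components that attach features to it. Illustrative only. */
    #include <stdio.h>

    #define MAX_COMPONENTS 8

    typedef struct {
      const char *name; /* e.g. "geometry", "material", "camera", "light" */
      /* Real components would also carry typed, animatable properties */
    } Component;

    typedef struct Entity Entity;
    struct Entity {
      float position[3];  /* 3D transform relative to the parent entity... */
      float rotation[4];  /* ...stored here as a translation + quaternion */
      Entity *parent;
      Component *components[MAX_COMPONENTS];
      int n_components;
    };

    static void
    entity_add_component (Entity *entity, Component *component)
    {
      if (entity->n_components < MAX_COMPONENTS)
        entity->components[entity->n_components++] = component;
    }

    int
    main (void)
    {
      Entity entity = { { 0, 0, 0 }, { 0, 0, 0, 1 }, NULL, { NULL }, 0 };
      Component geometry = { "geometry" };
      Component material = { "material" };

      /* Clicking an asset effectively does this: create an entity and
       * attach a component referring to the clicked asset */
      entity_add_component (&entity, &geometry);
      entity_add_component (&entity, &material);

      for (int i = 0; i < entity.n_components; i++)
        printf ("component: %s\n", entity.components[i]->name);
      return 0;
    }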

The little red record button that you can see next to some of the properties is used to add that property to the current timeline for animating...

Timelines:

Once you've built up a scene of entities, authoring animations is pretty simple. Clicking the red record button of a property says that you want to change the property's value over time, and a marker will pop up in the timeline view at the bottom. Left clicking and dragging on the timeline lets you scrub forwards and backwards in time. If you scrub forwards in time and then change a recorded property, a new marker pops up for that property at the current time. Left clicking markers lets you select and move them. Property values are then automatically interpolated (tweened) for timeline offsets that lie between markers.
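
The tweening itself can be as simple as linear interpolation between the two markers that bracket the current timeline offset. Here's a sketch of that with hypothetical names; Rig's real interpolation code will be more involved:

    /* Sample a property's value at time t by linearly interpolating
     * between the markers that bracket t. Illustrative only. */
    #include <stdio.h>

    typedef struct { float t; float value; } Marker;

    /* Given markers sorted by time, return the interpolated value at t */
    static float
    timeline_sample (const Marker *markers, int n_markers, float t)
    {
      if (t <= markers[0].t)
        return markers[0].value;
      for (int i = 1; i < n_markers; i++)
        {
          if (t <= markers[i].t)
            {
              float f = (t - markers[i - 1].t) /
                        (markers[i].t - markers[i - 1].t);
              return markers[i - 1].value +
                     f * (markers[i].value - markers[i - 1].value);
            }
        }
      return markers[n_markers - 1].value;
    }

    int
    main (void)
    {
      Marker markers[] = { { 0.0f, 0.0f }, { 1.0f, 100.0f }, { 2.0f, 50.0f } };
      for (float t = 0; t <= 2.0f; t += 0.5f)
        printf ("t=%.1f value=%.1f\n", t, timeline_sample (markers, 3, t));
      return 0;
    }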

These are the current timeline controls:
  • Left click lets you scrub the current timeline position
  • Left clicking markers lets you select them; Shift + Left click lets you select multiple markers
  • Left clicking and dragging lets you move selected markers left and right
  • Delete lets you remove markers
  • Ctrl-Z and Ctrl-Y also let you undo and redo timeline changes

Effects

We started with a fairly conservative set of effects for Rig 1, opting for effects that are well understood and widely used by game developers. This reduced some of the initial development risk for us, but there is a chance that our choices will give the impression we're simply trying to make UIs that look like console games, which isn't the intention.

Modern GPUs are extremely flexible pieces of hardware that enable an unimaginably broad range of visual effects, but they are also pretty complex. If you really want to get anything done with a GPU at a high level, you quickly have to lay down some constraints and make some trade-offs.

The effects we started with have a long history in game development and so we know they work well together. These effects emphasize building photorealistic scenes but there are lots of non-photorealistic effects and generative art techniques we are also interested in supporting within Rig. Since these are more specialized we didn't feel they should be our first focus while bootstrapping the tool itself.

Lighting

I'm sure you can guess what this effect enables, but here's a video anyway that shows a 3D model loaded in Rig and how its colour changes as I rotate it or if I change the lighting direction:

video

Although I don't envisage glaringly 3D models being used for core user interface elements, I think there could be some scope for more subtle use of 3D models, in live backgrounds for instance.

Shadow mapping

Shadow mapping is a widely used technique for calculating real-time shadows. It basically involves rendering the scene from the point of view of the light to find out which objects are directly exposed to the light. That image is then referenced when rendering the objects normally, to determine which parts of each object are in shadow so the computed colours can be darkened.
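
The heart of the technique is a single depth comparison per fragment. Here's a CPU-side sketch of that test; in Rig this runs on the GPU in a shader, and the names here are hypothetical:

    /* Compare a fragment's depth, as seen from the light, against the
     * depth stored in the shadow map. Illustrative only. */
    #include <stdio.h>

    /* Returns 1 if the point is lit, 0 if it is in shadow */
    static int
    shadow_test (float shadow_map_depth, /* depth stored in the shadow map */
                 float fragment_depth,   /* fragment's depth in light space */
                 float bias)             /* offset to avoid self-shadowing */
    {
      return fragment_depth - bias <= shadow_map_depth;
    }

    int
    main (void)
    {
      /* An occluder at depth 0.4 casts a shadow on a surface at depth 0.7 */
      printf ("occluded fragment lit? %d\n", shadow_test (0.4f, 0.7f, 0.005f));
      printf ("exposed fragment lit?  %d\n", shadow_test (0.7f, 0.7f, 0.005f));
      return 0;
    }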

Shadows can be used to clarify how objects are arranged spatially relative to one another, and we think there's potential for user interfaces to use depth cues, perhaps to help convey focus, relevance or the age of visible items, but also for purely aesthetic purposes.

video

Bump mapping

This is a widely used technique in game engines that lets you efficiently emboss a surface with bumps and troughs to make it more visually interesting under dynamic lighting. For an example use case in UIs, consider where we use simple silhouette emblems today, such as status icons: you might imagine introducing a subtle form of lighting over the UI (maybe influenced by touch input or just the time of day) and picture the look of light moving over those shapes.

Rig provides a convenient, standalone tool that can take an image and output a bump map, or take a bump map and output a normal map. These examples show an original icon, followed by its bump map and then its normal map:


Note: There are sometimes better ways to calculate normal maps for specific use cases, but this tool at least gives us one general purpose technique as a starting point. An artist might, for example, want to hand tune the bump map before generating a normal map.
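
Converting a bump (height) map into a normal map essentially means estimating the height gradient at each texel and building a surface normal from it. Here's a sketch of that step using central differences; it illustrates the general technique rather than the tool's actual code:

    /* Derive a normal-map texel from a height map: approximate the height
     * gradient with central differences, build the perpendicular normal,
     * then encode it in the usual [0, 255] RGB form. Illustrative only. */
    #include <math.h>
    #include <stdio.h>

    static void
    height_to_normal (const float *heights, int width, int height,
                      int x, int y, float strength, unsigned char rgb[3])
    {
      /* Clamp sampling at the image edges */
      int xl = x > 0 ? x - 1 : x, xr = x < width - 1 ? x + 1 : x;
      int yu = y > 0 ? y - 1 : y, yd = y < height - 1 ? y + 1 : y;

      /* Central differences approximate the height gradient */
      float dx = (heights[y * width + xr] - heights[y * width + xl]) * strength;
      float dy = (heights[yd * width + x] - heights[yu * width + x]) * strength;

      /* The normal is perpendicular to the gradient */
      float nx = -dx, ny = -dy, nz = 1.0f;
      float len = sqrtf (nx * nx + ny * ny + nz * nz);
      nx /= len; ny /= len; nz /= len;

      /* Encode [-1, 1] components into [0, 255] normal-map colours */
      rgb[0] = (unsigned char) ((nx * 0.5f + 0.5f) * 255.0f);
      rgb[1] = (unsigned char) ((ny * 0.5f + 0.5f) * 255.0f);
      rgb[2] = (unsigned char) ((nz * 0.5f + 0.5f) * 255.0f);
    }

    int
    main (void)
    {
      /* A tiny 3x3 bump map: a single raised texel in the middle */
      const float bump[9] = { 0, 0, 0,  0, 1, 0,  0, 0, 0 };
      unsigned char rgb[3];
      /* Sample just to the left of the bump, where the slope is steepest */
      height_to_normal (bump, 3, 3, 0, 1, 2.0f, rgb);
      printf ("normal colour: #%02x%02x%02x\n", rgb[0], rgb[1], rgb[2]);
      return 0;
    }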

This video shows the gist of what this effect enables, though for a more realistic use case I think it would deserve a more hand-tuned bump map, and more suitable texture mapping.

video

Alpha masks

Rig provides a way to associate an alpha mask with an entity, which can be used to cut shapes out of an image. Since the threshold value (used to decide what level of alpha should act as a mask) can be animated, you can also design the masks with animations in mind. For example, if you have a gradient mask like this:

If we animate the threshold from 0 to 1 we will see a diagonal swipe in/out effect applied to the primary texture.
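
The test itself is just a per-texel comparison against the animated threshold. A small sketch with hypothetical names follows; in Rig this would happen per fragment on the GPU:

    /* Decide whether a texel survives the alpha mask at the current
     * threshold. Animating the threshold over a gradient mask sweeps the
     * cut-off across the image, giving the swipe in/out effect. */
    #include <stdio.h>

    /* Returns 1 if the texel is kept, 0 if it is cut away */
    static int
    mask_test (float mask_alpha, float threshold)
    {
      return mask_alpha >= threshold;
    }

    int
    main (void)
    {
      for (float threshold = 0.0f; threshold <= 1.0f; threshold += 0.25f)
        printf ("threshold=%.2f texel(alpha=0.5) visible=%d\n",
                threshold, mask_test (0.5f, threshold));
      return 0;
    }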

This video shows a simple example of animating the threshold with two different mask assets:

video

Depth of field

Rig implements a basic depth of field effect that emulates how camera lenses can be made to bring a subject into sharp focus while leaving the foreground and background looking soft and out of focus.
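
One simple way to drive such an effect is to compute a blur radius (a "circle of confusion") per pixel from its distance to the focal plane, then use that to blend between sharp and blurred versions of the scene. Here's a sketch of that calculation, with hypothetical names; it is not Rig's actual implementation:

    /* Map a pixel's depth to a blur radius: zero inside the focal range,
     * ramping up to max_radius as the pixel moves away from the focal
     * plane. Illustrative only. */
    #include <math.h>
    #include <stdio.h>

    static float
    blur_radius (float depth, float focal_depth,
                 float focal_range, float max_radius)
    {
      float distance = fabsf (depth - focal_depth);
      float amount = (distance - focal_range) / focal_range;
      if (amount < 0.0f) amount = 0.0f;
      if (amount > 1.0f) amount = 1.0f;
      return amount * max_radius;
    }

    int
    main (void)
    {
      /* Subject in focus at depth 0.5; blur grows fore and aft of it */
      for (float depth = 0.0f; depth <= 1.01f; depth += 0.25f)
        printf ("depth=%.2f blur=%.2f px\n",
                depth, blur_radius (depth, 0.5f, 0.1f, 8.0f));
      return 0;
    }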

This example video alludes to using the effect for moving focus through a list of items that extends into the depths of the screen.
video

Status

At this point Rig can be used for some kinds of visual animation prototyping but it isn't yet enough to create a standalone application since we can't yet incorporate application logic or data.

Our priority now is to get Rig, as quickly as possible, to the point where it can be used end-to-end to design and run UIs. As such we're defining our next milestones in terms of enabling specific kinds of UIs, so we have concrete benchmarks to focus on.

Our next technical aim is to support application logic, basic input and the ability to interactively bind properties together with expressions. As a milestone benchmark we plan to create a Fox, Goose and Corn puzzle, chosen because its logic requirements are simple and it doesn't need to fetch or process external data.

The technical aim after that is to support data sources, such as contact lists or message notifications as assets where we can interactively describe how to visualize that data. The benchmark for this work will be a Twitter feed application.

I'm particularly looking forward to getting our ideas for data integration working, since the approach we've come up with should allow much greater creativity for things like showing a list of contacts or notifications, while also being naturally efficient by only focusing on what's visible to the user.

Summary

So, hopefully, if you've read this far you're interested in what we're creating with Rig. We're hoping to make Rig appeal to both designers and engineers who are looking to do something a bit more interesting with their interfaces. I'd like to invite the brave to go and check out the code here, and I hope the rest of you will follow our progress; feel free to leave comments and questions below.