May 3, 2014

FUIs meet reality – the need for a unified, usable, and good-looking interface.

Today’s interfaces are floundering. We now have more hardware than we know what to do with, and on top of that hardware we run applications with user interfaces straight out of the early ’90s. Innovation is stagnant.

In our entertainment we see amazing-looking user interfaces – FUIs, the fantasy user interfaces of film and TV – that work very well for the applications they’re shown performing, but we don’t typically see anything of the sort in the real world. We’re stuck with outdated “desktop”-style interfaces that increasingly defy their original intention: that they be easy to use.

There is some good work being done on more embedded devices like mobile phones, TVs, and the like, but I think what we need more of at the moment is unification – or at least enhanced interoperability through a single seamless interface. It would need to be useful and functional on any piece of terminal hardware, from a smart watch to a TV screen to a mobile phone to a notebook to a desktop with as many displays as one could imagine, to tablets and home automation systems, with full integration and consistency when it comes to running applications and sharing their results – and, more critically, a modal user interface that adapts to the current circumstances, easing use by displaying the most likely options most prominently.
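To make that “adapts to the current circumstances” idea a little more concrete, here is a minimal sketch – every name in it is hypothetical, and this describes no existing framework – of a shared UI layer that ranks available actions by estimated likelihood and lets each class of device decide how many to surface prominently:

```python
from dataclasses import dataclass

# Hypothetical action model: each action carries a score estimating how
# likely the user is to want it right now (from usage history, context, etc.)
@dataclass
class Action:
    label: str
    likelihood: float  # 0.0 .. 1.0, higher = more likely to be wanted

# How many actions each class of display surfaces prominently.
# The same interface definition renders everywhere; only density changes.
PROMINENT_SLOTS = {"watch": 1, "phone": 3, "tv": 4, "desktop": 8}

def layout(actions: list[Action], device_class: str) -> tuple[list[Action], list[Action]]:
    """Split actions into 'prominent' and 'overflow' for a given device."""
    ranked = sorted(actions, key=lambda a: a.likelihood, reverse=True)
    slots = PROMINENT_SLOTS.get(device_class, 3)
    return ranked[:slots], ranked[slots:]

actions = [
    Action("Reply", 0.9), Action("Archive", 0.6),
    Action("Forward", 0.3), Action("Print", 0.05),
]
prominent, overflow = layout(actions, "watch")
print([a.label for a in prominent])  # ['Reply'] -- the watch shows one option
```

The point of the sketch is that the interface definition stays identical everywhere; only the density of what gets shown changes per device.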

As far as speed of computation, storage, and I/O, I envision a cloud system – not as it is now, where a client device belonging to the user connects to a data center somewhere that provides the services you want, but rather a cloud of all of the technology a person owns: a sharing of compute, memory, and storage resources, and a shift of the whole architecture paradigm. If I’m editing a video on my tablet, it should be able to dynamically send blocks of content to my desktop computer or home server for decompression, or to accelerate a composite and store the output as a backup automatically, completely transparently – while the device, underpowered as it might be, concentrates its memory and computational abilities on providing me with the fastest, most usable user experience possible.
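As a rough illustration of what “a cloud of all the technology a person owns” could look like at the wire level – the message fields and port are made up for the example, not a real protocol – each device might periodically advertise its spare capacity to its peers, so any device knows where work can be sent:

```python
import json
import socket
import time

# Hypothetical advertisement a device broadcasts on the owner's LAN so
# peers can build a live map of who can lend what to the personal cloud.
def resource_advert(name: str) -> bytes:
    advert = {
        "device": name,
        "cpu_cores_free": 6,     # placeholder measurements; a real
        "ram_free_mb": 12288,    # implementation would sample these
        "storage_free_gb": 512,  # from the operating system
        "timestamp": time.time(),
    }
    return json.dumps(advert).encode()

# Broadcast over UDP; peers listening on the same port collect adverts.
# (Authentication and encryption are omitted for brevity here, though a
# secure connection is exactly what this scheme would demand.)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(resource_advert("home-server"), ("255.255.255.255", 49500))
```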

The easiest method I can envision for implementing this is writing a virtual machine that abstracts the underlying hardware features (CPUs, memory, storage, networking, etc.) and shares those resources over a secure network connection with the rest of the cloud, peer-to-peer style, with code that measures network dynamics, allows for varying levels of protocol for bandwidth management, and figures out whether running code remotely and passing back the output would be faster than doing it locally – performing dynamic video and audio compression and the like in the background to keep everything working as quickly as possible, whether the peer is 10 feet away or 10,000 miles.
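That “would remote be faster?” question boils down to comparing estimated local compute time against transfer-plus-remote-compute time. Here is a back-of-the-envelope version – all the numbers and names are hypothetical stand-ins for what the network-measurement code would supply:

```python
def should_offload(job_bytes: int, result_bytes: int, local_ops_per_s: float,
                   remote_ops_per_s: float, job_ops: float,
                   bandwidth_bytes_per_s: float, rtt_s: float) -> bool:
    """Estimate whether shipping a job to a peer beats running it locally."""
    local_time = job_ops / local_ops_per_s
    transfer_time = (job_bytes + result_bytes) / bandwidth_bytes_per_s + rtt_s
    remote_time = transfer_time + job_ops / remote_ops_per_s
    return remote_time < local_time

# A tablet weighing up whether to send a video chunk to the desktop:
# 50 MB chunk, 5 MB result, tablet roughly 10x slower, fast LAN link.
print(should_offload(
    job_bytes=50_000_000, result_bytes=5_000_000,
    local_ops_per_s=1e9, remote_ops_per_s=1e10, job_ops=5e10,
    bandwidth_bytes_per_s=100_000_000, rtt_s=0.002,
))  # True: ~0.55 s of transfer plus 5 s remote beats ~50 s locally
```

The same comparison flips the other way on a slow cellular link, which is why the measurement of network dynamics matters as much as the raw compute numbers.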

The user interface comes into play once all of this architectural glue code has done its work. It needs to be incredibly easy to use, appropriate for every level of hardware – even when you don’t have easy access to other hardware for acceleration – and it needs to provide for any application need in a unified manner. Some examples of this working wonderfully are the computers on the Enterprise in Star Trek: The Next Generation and the physical, holographic, and HUD displays used in the modern Iron Man movies. The layout in both of these examples is very similar, if not identical, across various pieces of hardware. Iron Man’s phone, the suit HUD, his holographic workspace, and his desktop all follow the same user interface guidelines, and that makes things easy to use. The same goes for the Enterprise computers: each of the ship’s terminals, the PADDs, the shuttles, and so on all use the same layout, and everything is networked together, so you could conceivably use a PADD as a remote for a shuttle, or something of the like.

Further, both examples feature prominent use of physical, holographic, and voice controls. In recent years our voice recognition has improved fairly drastically – enough to make it useful for many day-to-day tasks – but I couldn’t imagine writing this article with nothing but voice commands, because I type as fast as I think, but I really don’t speak all that well. I pause, I reword, I restructure my ideas on the page, moving forward and backward quite a bit. Voice user interfaces are great for commands, but lousy for authoring content.

The interface on my watch, my phone, my desktop, my notebook, my tablet, my car, my boat, my TV, my washing machine, my refrigerator, my toaster – it should all look and feel the same, and it should all be tied in with every other piece of technology nearby. Not only would this make using the resources of any other piece of technology as seamless as presenting the user with those options, it would also allow for immediate usability: a person would only have to learn a single user interface, and then they’d know how to use every other piece of technology they have access to.

With up-and-coming augmented heads-up display technologies like Google Glass – and soon stereoscopic see-through camera-and-screen sunglasses concepts like the Vuzix AR line (sunglasses with a camera where each eye would be, where the user looks at screens displaying what the eyes would ordinarily see, with augmented-reality data applied on top of the real-life imagery gathered from the cameras) – what will this do for user interfaces? It will make them as free-form as you can imagine. To the user it will appear as if words, graphics, “video windows”, and the like can float in mid-air. Having a streamlined, unified user interface that takes this into account would be rather amazing. It’s a whole new challenge for user interface development, and it gives everyone in the field the chance to figure out these new problems.

I believe the “heads-up display” approach to head-mounted display technology is flawed from the start, and that seems to be Google’s model with Glass. I think the best approach would be to attach informational user interface elements to real-world objects: listing a person’s biography beside their face, along with the status and notes of any projects involving them if they’re a co-worker; showing how much gas your car has as you approach it, perhaps warning you of any detected failures or needed maintenance (tire pressure low? oil change due?); showing the lowest prices and ratings on various products as you look at them in a shopping mall; or simply providing the means to generate a visual holographic interface as seen in Iron Man, whereby you can deconstruct scans of objects in 3D, letting you see easily inside engineering problems and never run out of display real estate, since you can generate as many virtual video windows of any size as you like. Once this technology matures, we may not even need dedicated video output devices at all – but that still means we’re going to need modern user interfaces to drive such technology. They need to be pretty. They need to be dynamic. They need to be ridiculously easy to use – you should be able to glance at them and know everything there is to know – and they should be fully realized, ubiquitous, and fully networked, so that we never have to worry about problems like not having enough local storage, computational power, or memory.
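As a sketch of that “attach information to real-world objects” pipeline – the recognizers, lookups, and overlay data below are all illustrative stand-ins, not real APIs or real readings – the core loop is: recognize what’s in view, fetch the context that matters for that kind of object, and anchor the result to it:

```python
from dataclasses import dataclass

@dataclass
class Recognition:
    kind: str                    # "person", "car", "product", ...
    ident: str                   # which specific person/car/product
    anchor: tuple[float, float]  # position in the wearer's field of view

# Hypothetical per-kind lookups returning the lines worth overlaying.
# A real system would query contact, project, vehicle, and shopping services.
def annotate(r: Recognition) -> list[str]:
    if r.kind == "person":
        return [r.ident, "Project status: on schedule", "Note: review due Friday"]
    if r.kind == "car":
        return ["Fuel: 43%", "Warning: left rear tire pressure low"]
    if r.kind == "product":
        return ["$24.99 here", "$19.95 across town", "Rating: 4.2/5"]
    return []

def frame_overlays(seen: list[Recognition]) -> list[tuple[tuple[float, float], list[str]]]:
    """One pass of the render loop: pair each anchored object with its overlay text."""
    return [(r.anchor, lines) for r in seen if (lines := annotate(r))]

for anchor, lines in frame_overlays([Recognition("car", "my-car", (0.7, 0.4))]):
    print(anchor, lines)  # (0.7, 0.4) ['Fuel: 43%', 'Warning: left rear tire pressure low']
```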

tl;dr: FUIs need to become real-world UIs; they need to be consistent, pretty, and useful in every situation, like LCARS or JARVIS.
