Why Your Car’s UI Sucks and What Automakers Could Do About It.

Why Your Car’s UI Sucks

Vehicle Navigation & Entertainment System UIs Nearly Universally Blow, and Here’s Why.

I’m testing the Ford Focus ST, a hotted-up 5-door that singlehandedly erases the car’s family-car drudgery heritage in a tire-squealing, blasting-towards-the-horizon, laughing-maniacally way. Until you try to play a song on your iPhone and the joy stops. No car should ever tell you, “Please fill out the metadata for your music library.” MyFord Touch is a branding exercise and touchscreen wrapping Microsoft’s SYNC, one of automotive-dom’s worst User Interface (UI) offenders. Other automotive UIs are marginally better, but there are very good reasons why car UIs generally suck.

The most obvious reason automotive UIs are horrible is that, apart from Elon Musk’s Tesla, car companies are not technology companies – at least not in the traditional sense. Think about the time, energy and number of iterations software startups put into user interface design and navigation. For a car company that is a major investment oblique to producing a product consisting of approximately 3,000 components. One that needs to convey the brand experience in the way it drives, sound right, not deafen the people inside it, actually go around corners, be seen out of, carry cargo, be gotten in and out of easily, meet safety standards… you get the idea. A massive number of considerations go into a vehicle’s design before you go to the dealership and say, “Wow, that Fiat 500 is too cute.”

Slowly though, the “not being technology companies” issue is changing. GM, for example, has not only opened up its in-vehicle and remote APIs to developers, but sponsored a number of TechCrunch hackathons to spark development. Ford has adopted the far more limited SYNC AppLink, a suite of APIs that enables developers to extend control of an application running on a smartphone through in-vehicle Human Machine Interfaces (HMI) such as SYNC Voice Command and the steering wheel and radio buttons. Additionally, the blue oval has embraced OpenXC, an open-source attempt to abstract the user-facing hardware and software from the vehicle. In part this is a strong recognition of the core problem of in-vehicle entertainment and navigation systems – the average lifespan of a vehicle today is 13 years, a smartphone’s is 6 to 9 months, and consumers have smartphone expectations.
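The appeal of this kind of abstraction is easy to show. Vehicle-data platforms in the OpenXC mould hand developers a stream of simple, self-describing messages rather than raw, proprietary bus traffic. Here is a minimal sketch of what consuming such a stream can look like – the signal names and the canned input below are my own illustrative assumptions, not a verbatim copy of any particular SDK:

```python
# Sketch: consuming a stream of simple JSON vehicle messages.
# Signal names ("vehicle_speed", "headlamp_status") are assumed for illustration.

import json


def handle_message(line: str) -> None:
    """Parse one newline-delimited JSON message and react to it."""
    try:
        message = json.loads(line)
    except ValueError:
        return  # ignore partial or malformed frames

    name = message.get("name")
    value = message.get("value")

    if name == "vehicle_speed":        # assumed signal name
        print(f"speed: {value} km/h")
    elif name == "headlamp_status":    # assumed signal name
        print(f"headlamps on: {value}")


# Canned stream for the example; a real app would read from the vehicle
# interface (USB, Bluetooth, or a trace file) instead.
sample_stream = [
    '{"name": "vehicle_speed", "value": 72.4}',
    '{"name": "headlamp_status", "value": true}',
]
for line in sample_stream:
    handle_message(line)
```

The point isn’t the specific field names; it’s that once the vehicle looks like a data feed, the pace of app development stops being coupled to the pace of the vehicle program.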

Still, media, communications, and navigation interfaces (also known as in-vehicle telematics devices) seem to get short shrift in the list of priorities, but after the steering wheel, shifter, and door handles, they are likely the fourth most interacted-with element in a vehicle. That makes this UI hugely important in the hierarchy of interaction. You live with a poor interface here every day of your life with the vehicle, and that maps to your perception of the brand.

Automakers attempt to compensate by partnering with outside technology companies. One issue with this is that they are grafting another company’s brand experience onto their offering, so there is a discontinuity. Ford and Microsoft with MyFord Touch and SYNC are the most pertinent example. Why a car company would go to Microsoft for UI is a bit baffling – except of course the UI is tied to a fairly complex embedded system, where deeper and darker technical knowledge comes into play. That Ford wraps SYNC in its own MyFord Touch touchscreen interface is the worst of all worlds: a rigid and unfriendly operating system paired with a poorly executed touchscreen with small, hard-to-hit hotspots and deep, complex menus.

Things aren’t hugely better for GM, whose CUE (Cadillac User Experience, and future Tron villain) touchscreen is built atop Linux and provides an underpowered, sluggish experience. GM’s MyLink oddly stands out as at least being simple and straightforward to operate. Premium brands like BMW and Audi hit the best balance by wrapping the QNX system from BlackBerry, which, coincidentally, is the basis for BB10. Across the board though, the trend towards touchscreen UIs is a problem.

Apple decided against a touchscreen MacBook laptop, reportedly to avoid gorilla arm syndrome. Go ahead, hold one arm out, now try to hit a small target like an icon. Poor targeting, and a slightly tired arm. Now map this experience to burrowing through several menus to set an equalizer so SiriusXM doesn’t sound like a sea of mud. Arm fatigue? Now let’s break away from the laptop as an example, put the screen you’re trying to hit at arm’s length in a vehicle’s center stack, and execute all this while you should be paying attention to the road.

The experience equates to arm fatigue, distraction and frustration. This shouldn’t be a shock to anyone who’s taken the time to look at where touchscreens work well: held in your hand, well within your line of sight, largely thumb-operated, with relatively large target areas (due to proximity) – in short, a smartphone or tablet. Both of which sit on the bleeding edge of iterative UI design, versus a car, which has approximately a 3 to 5-year development cycle and where software updates still require a trip to the dealer. Without a better update process your car will always lag about 5 years behind current UI thought.

The center stack placement for a touchscreen is an outright fail. It’s also a concession to the passenger, who would periodically and randomly like to turn down the volume on that NPR or CBC show you’re listening to and talk to you about it right before the crucial conclusion. Historically, the center stack also saved production costs, as it obviated the need for separate passenger and driver controls – until steering-wheel-mounted controls entered the scene.

The complexity with steering-wheel-mounted controls is that they have to map to the center stack’s functionality. If the center stack is a touchscreen, you’re mixing your UI metaphor by mapping a load of buttons and maybe a roller dial onto a touch-oriented design. The steering wheel now makes an Xbox controller look straightforward, and you’re expecting an elderly driver with bad eyesight on a freeway of uncooperative traffic to operate it.

One of the best executed UIs in this area is BMW’s iDrive, which is largely scroll-wheel based, allowing for relatively consistent mapping. Often though, car companies (Ford and Chrysler, I’m looking at you) don’t even try. Often the functionality available from the wheel doesn’t fully connect with the center stack’s primary communications system. Even if you have a menu button on the wheel, it just scrolls you through the trip, odometer, and fuel efficiency readings on the dash. The expectation would be that menu affects the center stack.

Lagging guidelines are an issue too. The Alliance of Automobile Manufacturers is an association of twelve major automobile manufacturers, some of whom sit at the bleeding edge of premium consumer brands, yet their Statement of Principles, Criteria and Verification Procedures on Driver Interactions with Advanced In-Vehicle Information and Communication Systems – the Human Machine Interface (HMI) guidelines – dates to 2006. These guidelines predate the alpha release of Android smartphones and the 2007 release of the first-generation iPhone – consider how far those operating systems and user interfaces have evolved in that time.

Of course, vehicle UIs don’t have to be this way. What falls out of these observations are a few Human Interface Guidelines for automotive design and a thought experiment applying the leading edge of today’s technologies in an attempt to “future proof” a vehicle’s UI.

Potential Solutions and a Few Speculative Guidelines

Automobiles suffer from scattering the same UI elements across the control areas of the vehicle: touchscreen options are replicated on the dash (climate and audio controls) and steering wheel. This is largely out of convenience for automakers, so that if a buyer upgrades the navigation or entertainment package, the option is simply slapped in at the factory. This contributes to the problem of controls not operating consistently across scattered locations.

Additionally, these different physical UI elements have varying lifespans. Automakers have to contend with a product that needs to function for a minimum of 5 years, and will most likely have a lifespan of 10-20 years. At that point the touchscreen will likely be marred to hell, but those buttons on the wheel and dash will still be working. So, if you can only access specific settings through the touchscreen, that may become a long-term ownership issue.

From this we can cull a guideline of Consistency: all the functionality should be accessible from all UI locations, in a similar manner. This also helps address the need for actual physical Durability in the interface, which is seldom a concern for software UI design. Durability needs to be a guideline too – the controls need to be physically robust enough to survive the vehicle’s lifetime. Automakers are aware that a used vehicle says as much (if not more) about a brand as a new one.

It’s easy to call for Consistency, but another thing entirely to implement it. Currently in vehicles we see a dog’s breakfast mix of UI in touchscreens, touchpads, dials, rollers and buttons accessing varying functionality. This fragmentation of input technologies makes it hard to apply the same UI paradigm across control areas. Here’s the thing, the important takeaway: all those input devices are special cases of a gesture interface. Left or right, up or down, scroll, depress – those actions are all gestural, they just happen to interact with a physical element or surface. That becomes an important thought for Consistency, because it lets us look at all the control areas in a unified manner. Yes, there’s a language shift there – control areas, not surfaces.
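To make that unification concrete, here’s a minimal sketch of the “everything is a gesture” idea: every control area – steering-wheel buttons, touchscreen, console roller – is just an adapter that emits the same small vocabulary of gestures to one handler. All the names here are hypothetical; this isn’t any production automotive SDK.

```python
# Sketch: unify all physical control areas behind one gesture vocabulary.

from enum import Enum, auto
from typing import Callable


class Gesture(Enum):
    FLICK_LEFT = auto()
    FLICK_RIGHT = auto()
    SCROLL_UP = auto()
    SCROLL_DOWN = auto()
    PUSH = auto()  # select / drive into a menu


GestureHandler = Callable[[Gesture], None]


class ControlArea:
    """Base class: any input surface that can emit gestures."""

    def __init__(self, handler: GestureHandler) -> None:
        self._handler = handler

    def emit(self, gesture: Gesture) -> None:
        self._handler(gesture)


class SteeringWheelButtons(ControlArea):
    """Physical buttons map one-to-one onto the gesture vocabulary."""

    def on_button(self, name: str) -> None:
        mapping = {
            "left": Gesture.FLICK_LEFT,
            "right": Gesture.FLICK_RIGHT,
            "roller_up": Gesture.SCROLL_UP,
            "roller_down": Gesture.SCROLL_DOWN,
            "select": Gesture.PUSH,
        }
        self.emit(mapping[name])


class TouchSurface(ControlArea):
    """A touchscreen or touchpad reduces swipes to the same gestures."""

    def on_swipe(self, dx: float, dy: float) -> None:
        if abs(dx) > abs(dy):
            self.emit(Gesture.FLICK_RIGHT if dx > 0 else Gesture.FLICK_LEFT)
        else:
            self.emit(Gesture.SCROLL_UP if dy > 0 else Gesture.SCROLL_DOWN)


def hud_handler(gesture: Gesture) -> None:
    print(f"HUD received {gesture.name}")


wheel = SteeringWheelButtons(hud_handler)
screen = TouchSurface(hud_handler)
wheel.on_button("select")        # -> HUD received PUSH
screen.on_swipe(dx=-40, dy=3)    # -> HUD received FLICK_LEFT
```

The payoff is that the UI logic only ever sees gestures, so every control area behaves the same way by construction – which is exactly what Consistency asks for.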

One thing automakers haven’t been is forward-looking or predictive of new interface technologies. Mercedes may bundle Night View Assist (or night vision to the rest of us) into their latest offerings, but how you get to it isn’t nearly as innovative. So automotive companies need to be predictive now of the user interface technologies that will be commonplace within the next five years – if only because you can’t upgrade physical systems.

So where should automakers be looking? Google Glass offers a paradigm for heads-up display and interaction based on cards. Gesture-based interfaces have arrived. Of note, Samsung’s Galaxy S4 features Air View for accessing detailed information “behind” a UI element, like the e-mail behind a header (essentially a hover gesture), and Air Gesture, letting you flip between tracks or photos, or answer calls, with a bit of hand waving. Apple is filing a number of similar patents, which is important because cellphones sit at the leading edge of consumer technology with a ubiquity that educates the masses in the usage of new technologies. To be even more predictive, automakers should be looking at patent trends in UI. So a looser guideline then: be predictive, or even better, upgradeable.

One interesting element of these technologies that’s valuable for automakers to draw on is that they are minimally intrusive to the driving experience. Heads-up displays are well executed by BMW, especially their turn-by-turn navigation, which is just a special case of what Google Glass offers. Ford has added a gestural control allowing you to open the Escape’s rear hatch with a mimed kick – a very special case. Our next guideline then: minimally intrusive, and that needs to apply to input as well as display.

Voice control already plays an increasingly important role in automotive UI, because keyboard-style touchscreen entry of addresses for navigation, or songs for selection, is simply distractingly unsafe design. Think driving (a large-target motion) while undertaking the small-target motions involved in hitting keys – though a keyboard with only four target zones like SnapKeys (http://www.snapkeys.com/en/) offers an alternate solution. Where touch is used then, hotspots need to be large and easily targeted by the driver.
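To see why four zones can be enough, here’s a rough, made-up illustration of how such a keyboard can work: the alphabet collapses into four big zones and a dictionary disambiguates the tap sequence, T9-style. The letter grouping and the word list below are invented for the example, not SnapKeys’ actual layout.

```python
# Sketch: four large tap zones plus dictionary disambiguation.
# Grouping and dictionary are illustrative assumptions.

GROUPS = {
    1: set("abcdef"),
    2: set("ghijklm"),
    3: set("nopqrs"),
    4: set("tuvwxyz"),
}

ZONE_OF = {letter: zone for zone, letters in GROUPS.items() for letter in letters}

DICTIONARY = ["main", "maple", "market", "mill", "oak"]  # e.g. street names


def zone_sequence(word: str) -> tuple:
    """Which zones would you tap to spell this word?"""
    return tuple(ZONE_OF[c] for c in word.lower())


def candidates(taps: tuple) -> list:
    """Dictionary words whose zone sequence matches the taps so far."""
    return [w for w in DICTIONARY if zone_sequence(w)[: len(taps)] == taps]


# The driver taps zone 2 ("m" lives there) then zone 1 ("a" lives there):
print(candidates((2, 1)))  # -> ['main', 'maple', 'market']
```

Four huge targets are easy to hit with a glance and a jab, and the dictionary does the precision work the driver shouldn’t have to.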

One final thought: a vehicle’s navigation and entertainment functionality should gracefully degrade with the loss of internet access. UI elements like voice control cannot fail or degrade due to a lack of connectivity, which is why the stopgap of having your vehicle use your cellphone as its media and navigation center is a failing strategy. It also ignores the longevity of a vehicle compared to the lifespan of a given smartphone operating system or docking connector. The joy of something the size of a car is that it has a lot more space available to house the required computing power than a cellphone.
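Graceful degradation is really an architectural rule: the core function runs from onboard resources, and connectivity only layers enrichment on top. A tiny, hypothetical sketch of that split:

```python
# Sketch: onboard capability is always there; connectivity only enriches it.
# Function and field names are hypothetical.

def plan_route(destination: str, online: bool) -> dict:
    route = {
        "destination": destination,
        "source": "onboard maps",      # always available, no network needed
        "guidance": "turn-by-turn",
        "traffic": None,
    }
    if online:
        route["traffic"] = "live traffic overlay"  # enrichment only
    return route


print(plan_route("401 Main St", online=False))  # still fully functional
print(plan_route("401 Main St", online=True))
```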

So, what would our vehicle’s navigation and entertainment UI look like taking these considerations into account – or at least my vision of it? Remarkably slim, and cheaper to produce overall.

Let’s draw heavily on Microsoft’s SYNC, because this is a company that has all the resources, but has failed to meld them together. SYNC is also used by Ford, who is in the middle of a renaissance of quality, so obviously has an interest in improvement. To start, let’s banish the touchscreen and replace it with a heads-up display in the center of the windscreen, shared by the driver and passenger. For our gestural interface, let’s use Microsoft’s Kinect, which has a readily available SDK and hardware – similar systems are already out there, mounted on the rearview mirror of vehicles. Using Kinect, the system could detect whether there is a driver, or a driver and passenger, sizing and positioning the heads-up display appropriately – one more guideline: the UI should be adaptive. For the moment I’m going to assume the driver and passenger aren’t going to be hand talkers – we’re looking at broad strokes here.
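The adaptive part is simple to express. Occupancy comes from the cabin depth camera, and the HUD layout follows from it. The layout fractions and class names below are illustrative assumptions, not measurements:

```python
# Sketch: resize and recenter the shared windscreen HUD based on occupancy.
# Numbers and names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class HudLayout:
    width_fraction: float   # share of windscreen width used
    center_offset: float    # 0.0 = centered, negative = biased toward driver


def layout_for(driver_present: bool, passenger_present: bool) -> HudLayout:
    if driver_present and passenger_present:
        # Shared display: wider, centered between both occupants.
        return HudLayout(width_fraction=0.6, center_offset=0.0)
    if driver_present:
        # Driver only: narrower and biased toward the driver's sight line.
        return HudLayout(width_fraction=0.4, center_offset=-0.25)
    return HudLayout(width_fraction=0.0, center_offset=0.0)  # HUD off


print(layout_for(driver_present=True, passenger_present=True))
```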

On the Heads Up Display (HUD), we’d have major areas of functionality displayed as cards. These cards could be swiped between – Navigation, Phone, Environment, Radio, Music, Vehicle and Service Info, Overall Vehicle Entertainment Settings. Push into a card and it reveals the next level of menu; say Radio presets to flick between, with the last two cards being station selection for non-presets.

Being an adaptive UI, the cards would be ordered based on usage, potentially with different counts based on the presence of a passenger or a single driver. In keeping with our minimally intrusive maxim, menus should be kept as shallow as possible – having to dig deep into a system while driving is a fail. Selecting between cards amounts to left or right flicks. Selecting a card could be accompanied by a push gesture, driving into the menu. Scrolls, by an up or down flick. Precise selection, say of a radio station, could be accomplished by a stationary select area of the scroll (think iPhone date rollers) and a push gesture. Let’s assume the hand waving doesn’t need to be an exaggerated motion; we’re working in a small, consistent space for the system to identify gestures. Conveniently, we’ve accomplished durability by minimizing actual push-the-button physical interaction.
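Here’s a sketch of that card navigation driven by the same five-gesture vocabulary – flick left/right between cards, push to drive in, scroll within a card, push again to select. The card names and sub-items are illustrative:

```python
# Sketch: gesture-driven card deck for the HUD. Gestures are plain strings
# here, mirroring the vocabulary from the earlier sketch.


class CardDeck:
    def __init__(self, cards: dict):
        self.names = list(cards)   # card order (could adapt to usage)
        self.items = cards         # card name -> list of sub-items
        self.card = 0              # focused top-level card
        self.entered = None        # index into the open card's items, if any

    def handle(self, gesture: str):
        if gesture in ("flick_left", "flick_right") and self.entered is None:
            step = 1 if gesture == "flick_right" else -1
            self.card = (self.card + step) % len(self.names)
        elif gesture == "push" and self.entered is None:
            self.entered = 0                       # drive into the card
        elif gesture in ("scroll_up", "scroll_down") and self.entered is not None:
            step = 1 if gesture == "scroll_down" else -1
            items = self.items[self.names[self.card]]
            self.entered = (self.entered + step) % len(items)
        elif gesture == "push":
            return self.items[self.names[self.card]][self.entered]  # select


deck = CardDeck({
    "Radio": ["Preset 1", "Preset 2", "Preset 3", "Station list"],
    "Navigation": ["Home", "Work", "Enter address"],
    "Phone": ["Favourites", "Recent calls"],
})
deck.handle("push")            # drive into Radio
deck.handle("scroll_down")     # move to Preset 2
print(deck.handle("push"))     # -> Preset 2
```

Note how shallow it stays: two pushes and a scroll get you to a station, which is about as much interaction as anyone should be doing at speed.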

That’s the premium system, obviously, so for the base model we take the general case of the gestures and apply it to steering wheel and center console controls. A left button, a right button, an up and down roller (or up and down buttons) in the middle, and a select button (our push gesture) would suffice – in this way we have two durable backups to the gesture interface that are also gesturally consistent. We also have four hotspots with which to use a SnapKeys-style system for navigation address entry as a backup to voice control.

One point behind this very basic gestural interface is that it is upgradeable. As long as the basic gestures (scroll up, scroll down, flick left, flick right, push in) remain consistent in the system, the UI is upgradable at time of service.

What about the speedometer and tachometer? Some design elements are best left alone; those gauges have been evolving since Carl Benz decided looking at a horse’s ass was passé in 1886. Eventually, just as the speedo and tach found their perfect placement, navigation and entertainment systems will find theirs. The guidelines proposed here might not be the perfect path, but they offer an option to guide that evolution.

Note: This thought piece was originally published as a two-part post on Medium. Additional details and references have been added for this version.

