By Paul Tidwell
Vice President of Technology, projekt202
In advanced organizations that leverage a user-centered approach to design and development, we have reached an unprecedented level of maturity, creating compelling user experiences based on observational data about what users need.
While few organizations are equipped to do the fieldwork and synthesis themselves, the techniques for turning quantitative and qualitative data into software continue to gain acceptance and are regularly applied by specialized teams. As a result, dashboards are more targeted, context-sensitive and powerful.
Furthermore, line-of-business solutions have reduced time on task and increased input accuracy, and modern solutions require less training and documentation, which together reduce cost of ownership dramatically. We have evolved the design process to focus effort on only what is needed, resulting in greater adoption and retention, while effort spent on features that won’t be used has been all but eliminated.
Meanwhile, in the software development arena, the technology world has been making major headway in machine learning by moving beyond academia and into implementation and integration of these systems to yield amazing outcomes for businesses.
Machine learning allows us to create systems that are either instructed by a user to make decisions or fed massive amounts of data from which they learn to predict desired outcomes. Some early adopters of this technology were marketers developing retargeting campaigns in e-commerce, where buying behavior and shopping history are cross-referenced against a massive pool of other users’ behaviors and conversion patterns to suggest items a buyer needs. Many of the obvious uses have been implemented and are now considered old news.
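The retargeting idea above can be sketched in a few lines. This is a minimal illustration of item-to-item similarity over purchase history, not any particular vendor's method; the products and the purchase matrix are invented for illustration.

```python
from math import sqrt

# Toy purchase history: rows are users, columns are products (1 = bought).
# All names and data here are invented for illustration.
PRODUCTS = ["tent", "sleeping_bag", "stove", "laptop"]
PURCHASES = [
    [1, 1, 1, 0],  # user A: camping gear
    [1, 1, 0, 0],  # user B
    [0, 1, 1, 0],  # user C
    [0, 0, 0, 1],  # user D: electronics only
]

def cosine(col_a, col_b, matrix):
    """Cosine similarity between two product columns of the purchase matrix."""
    dot = sum(row[col_a] * row[col_b] for row in matrix)
    norm_a = sqrt(sum(row[col_a] ** 2 for row in matrix))
    norm_b = sqrt(sum(row[col_b] ** 2 for row in matrix))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def suggest(bought_index, matrix):
    """Rank the other products by similarity to one the buyer already owns."""
    scores = [
        (PRODUCTS[j], cosine(bought_index, j, matrix))
        for j in range(len(PRODUCTS)) if j != bought_index
    ]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# A buyer who purchased a tent is shown other camping gear first.
print(suggest(PRODUCTS.index("tent"), PURCHASES))
```

Real systems replace this tiny matrix with millions of rows and a learned model, but the core move is the same: correlate one user's behavior with the pool of everyone else's.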
More complex implementations in industrial settings look at data lakes of time-series data (temperatures, RPMs, environmental readings, velocity, fluid levels and more) to predict points of failure in complex systems, scheduling maintenance to minimize downtime and optimizing production during uptime.
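A crude stand-in for such a failure predictor: flag a sensor reading when it drifts far from its recent rolling baseline. The window, threshold and temperature readings below are invented for illustration; a production system would learn these patterns from historical failure data rather than hand-pick a threshold.

```python
def rolling_mean(values, window):
    """Mean of the last `window` readings (or fewer at the start)."""
    return [sum(values[max(0, i - window + 1): i + 1]) /
            len(values[max(0, i - window + 1): i + 1])
            for i in range(len(values))]

def failure_risk(readings, window=5, threshold=10.0):
    """Indexes where a reading deviates from its rolling baseline by more
    than `threshold` units -- a crude stand-in for a learned model."""
    baseline = rolling_mean(readings, window)
    return [i for i, (r, b) in enumerate(zip(readings, baseline))
            if abs(r - b) > threshold]

# Bearing temperatures: stable operation, then a spike preceding failure.
temps = [70, 71, 70, 72, 71, 70, 71, 95, 96, 97]
print(failure_risk(temps))  # flags the spike at indexes 7, 8, 9
```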
Imagine a modeling tool that lets turbine operators weigh the financial outcome of pushing hardware to its upper thresholds, producing more power but shortening the life of certain parts, against models that yield lower output yet maximize part lifetime, all while considering parts cost, external factors that influence downstream value creation, and market conditions, as well as the mechanical and thermodynamic components. Machine learning allows complex datasets to be correlated, measured and anticipated.
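The kernel of that tradeoff can be written down directly: hourly revenue from output minus the hourly amortized cost of part wear. All figures below (prices, lifetimes, costs) are invented for illustration; a real model would estimate them from operational and market data.

```python
def net_value_per_hour(output_mw, price_per_mwh, part_cost, part_life_hours):
    """Hourly revenue minus hourly amortized part-replacement cost."""
    revenue = output_mw * price_per_mwh
    wear_cost = part_cost / part_life_hours
    return revenue - wear_cost

# Conservative operation: lower output, longer part life.
conservative = net_value_per_hour(output_mw=50, price_per_mwh=40,
                                  part_cost=1_000_000, part_life_hours=20_000)

# Aggressive operation: 10% more output, but parts wear out twice as fast.
aggressive = net_value_per_hour(output_mw=55, price_per_mwh=40,
                                part_cost=1_000_000, part_life_hours=10_000)

print(f"conservative: ${conservative:,.0f}/h, aggressive: ${aggressive:,.0f}/h")
```

At these invented numbers the aggressive profile wins; at a lower power price the conservative one would. The machine learning layer's job is to keep every input to this calculation current and correlated.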
These new applications – backed by powerful, deep-learning algorithms – still benefit from currently emerging best practices in application design and development. In many cases, the current paradigm works well to surface data produced by machine learning solutions.
So, all is well, right? For now that holds true, but not for the future.
We need a new paradigm for user interfaces to fully realize the power of machine learning-based solutions. Systems of engagement are often statically defined, based on what we observe a user needs at that time. As the systems behind the user interface get smarter and seek to teach the user how to optimize the business, the user interface needs to become dynamic enough to let the machine surface different KPIs and types of data, which are often relational in nature, in meaningful ways that realize their potential.
These machine-learning solutions will help humans find and optimize bottlenecks. As the users of the system make operational changes to eliminate the initial bottleneck, a new point becomes the bottleneck. That bottleneck is then optimized out of existence, and the cycle repeats.
Also, consider that as new systems evolve, completely new elements and constraints are introduced. An ML-enabled system would present each optimization opportunity in sequence, even though each represents a unique set of correlated data and observations and thus requires a different interface expression.
The new dashboard – the machine learning-enabled system of engagement – needs to surface ever-changing visualizations in evolving contexts to be useful. New data points and relationships become relevant as the current point of focus is optimized out of immediate relevance.
Likewise, the user interface must surface and suppress data visualizations that present the issue at hand in a meaningful way. This does not render user-centered design ineffective; quite the opposite. The scope of the design effort expands somewhat, and its outcomes are captured in a set of heuristic-driven interaction specifications and component libraries.
Development will need to take these designs and implement them as rule-driven components that map back to the underlying system, building an engine that responds to ML-generated cognition or emphasis events from the underlying data-driven system. How to build those services and engines is a topic for another day.
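One way such an engine might look, sketched minimally: heuristic rules from the design effort map each emphasis event to the component best suited to render it. The event fields, rule predicates and component names below are all invented for illustration.

```python
# Heuristic rules from the design effort: (predicate, component) pairs,
# checked in order. These rules and names are hypothetical.
RULES = [
    (lambda e: e["kind"] == "trend" and e["dimensions"] == 1, "line_chart"),
    (lambda e: e["kind"] == "correlation" and e["dimensions"] == 2, "scatter_plot"),
    (lambda e: e["kind"] == "threshold_breach", "alert_banner"),
]

def compose(event, rules=RULES, default="data_table"):
    """Pick the first component whose rule matches the emphasis event."""
    for predicate, component in rules:
        if predicate(event):
            return component
    return default

# The ML system shifts emphasis to a newly correlated pair of metrics,
# and the UI responds by choosing a two-dimensional visualization.
event = {"kind": "correlation", "dimensions": 2, "metrics": ["rpm", "temp"]}
print(compose(event))
```

The design choice worth noting: the rules live in a declarative table rather than in the components themselves, so designers can extend the mapping as the ML system learns to emphasize new kinds of data.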
As the ML system shifts emphasis to a new set of data points, the UI will construct or render new compositions in response, pushing forward the most relevant data in the most appropriate form for the user persona at hand and supporting the user’s current context. Perhaps a machine will never learn why a human being did something a certain way, but it will certainly give humans powerful prediction engines for making the right decisions.
In conclusion, current techniques continue to evolve and still provide a foundation for what is to come. Their application and implementation must take the next evolutionary step to support this emerging technology.
For now, user experience professionals can sleep well, since these learning machines are a long way away from being able to design a decent user experience on their own, but soon, we’ll need a solution for ML-driven, dynamic data visualization and user interfaces.