We can show the value of Codeweavers' User Centred Design philosophy by comparing and quantifying the time, and therefore money, saved by using the Codeweavers platform.
In order to make Codeweavers products more useful, usable and valuable to our customers, our User Experience team apply many different formative and summative methods for testing and evaluating all of the cool stuff that we build. The value of some of these methods is not always immediately visible, as they rely on interpreting analytics data and empirical, sometimes anecdotal, information about an experience that is difficult to quantify.
In the B2B space, it is usually difficult to recruit specialised participants for testing, and even more difficult to get consistent access to enterprise environments - yet we consistently find that B2B interfaces generally perform poorly when it comes to usability and would benefit the most from formal usability testing. The average number of problems in B2B interfaces is almost double that (*measuringu.com/problem-frequency/) of consumer software and an order of magnitude higher than websites.
To get around the problem of limited testing, and to demonstrate the value of good User Centred Design, we have recently implemented a new technique which quantifies both how effectively a product performs for users and the effect this has on our customers' bottom line.
Christian Rohrer at Intel Security devised a method called PURE (Pragmatic Usability Rating by Experts) (*https://measuringu.com/article/a-pragmatic-approach-for-scoring-product-usability/), an analytic usability-evaluation method that is rapid to deploy, reliable and valid. PURE relies on usability experts judging the difficulty of the fundamental steps that well-defined user groups would take to complete tasks.
Tasks are broken down into steps, and each step is rated from 1 to 3, where 1 is easy and 3 is most difficult. The goal is to reduce user effort and obtain as low a PURE score as possible.
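As a rough sketch of the scoring mechanics (the task, step names and ratings below are invented for illustration; in Rohrer's method the task score is the sum of the per-step ratings):

```python
# Illustrative PURE scoring: an expert rates each step of a task from
# 1 (easy) to 3 (most difficult); the task's PURE score is the sum.
def pure_score(step_ratings):
    if any(r not in (1, 2, 3) for r in step_ratings):
        raise ValueError("each step must be rated 1, 2 or 3")
    return sum(step_ratings)

# A hypothetical five-step 'set up an offer' task.
ratings = [1, 2, 1, 3, 2]
print(pure_score(ratings))  # → 9 (lower is better)
```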
In the example shown, friction points within a process can be quickly identified, prioritised, and ultimately changed to improve the ease of use.
We have developed a twist on this method, which is to take a panel of expert users (typically, someone with knowledge of interface design heuristics, as well as the user group and domain), assign a score and also time each of the steps required to perform a task.
This also allows us to compare an expert user's performance with an 'average' user's performance by measuring time on task and understanding the gap between the two levels of expertise. The smaller the gap, the better the design.
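A minimal sketch of that comparison, with invented per-step timings: total the step times for each user and treat the ratio as a rough 'expertise gap', where a ratio near 1.0 suggests the design demands little specialist knowledge:

```python
# Hypothetical per-step times (in seconds) for the same four-step task.
expert_times = [10, 25, 15, 30]    # expert panel member
average_times = [14, 40, 22, 55]   # 'average' user

# Ratio of total time on task: 1.0 would mean no expertise gap at all.
gap_ratio = sum(average_times) / sum(expert_times)
print(f"expert: {sum(expert_times)}s, average: {sum(average_times)}s, "
      f"gap: {gap_ratio:.2f}x")
```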
By noting the friction points and altering the interface to simplify the step and reduce the time taken, we help our customers increase their efficiency and reduce the costs associated with performing tasks.
In the example below, we compare the same process on 2 versions of our Showroom system. In V1, setting up an offer takes 306 seconds, versus 212 seconds in V2. It’s not all smooth sailing though… There are three major points of friction in V1 compared to two in V2. The friction in the third step in V1 was reduced, but this made step four more difficult and time consuming, so solving one problem created another problem further down the chain.
However, the crucial addition of time to the original PURE method shows that, while the change we made wasn't perfect, an average user still potentially saves up to 94 seconds overall by using V2.
If we extrapolate this out a little… let's say the user performs the same task 20 times in a day; they save 1,880 seconds (around 31 minutes). Multiply the operator's cost per unit of time by the time saved. If a similar saving occurs multiple times per week across multiple users and multiple processes, the benefits of this saved time rack up pretty quickly.
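That back-of-envelope arithmetic can be sketched like this (the £18/hour operator cost is an assumed figure purely for illustration):

```python
seconds_saved_per_task = 306 - 212   # V1 vs V2 times from the example above
tasks_per_day = 20
hourly_cost = 18.0                   # assumed operator cost, £ per hour

daily_seconds_saved = seconds_saved_per_task * tasks_per_day
daily_minutes_saved = daily_seconds_saved / 60
daily_saving = (daily_seconds_saved / 3600) * hourly_cost

print(f"{daily_seconds_saved}s/day (~{daily_minutes_saved:.0f} min), "
      f"= £{daily_saving:.2f} saved per user per day")
```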
There are nuances to this technique - time on task is not the only measure of efficiency. In particular, the error rate (the number of errors that occur during the task) needs to be taken into account, which can sometimes mean adding friction to a task in order to increase accuracy. This may increase the initial time on task, but it can save a considerable amount of time in corrections and still make processes more efficient overall.
To be clear, this is a part of, not a substitute for, usability testing, but it does give a reliable indication of friction points and optimisation potential for all Codeweavers platform products. It is extremely valuable for an interface that is unlikely to be tested with users (because of cost, priority, or difficulty accessing the environment in which it is used) and when a metric about the experience is needed for management.
As previously mentioned, there is also scope to overlay and compare these scores and timings with observations and analytics data of a specified group of users ‘in the wild’, to further verify that the changes we make are effective.
If you have any questions about PURE or other UX methodologies, or would even just enjoy a heated debate, drop us an email to firstname.lastname@example.org