In this project I cover the thoughts and processes behind redesigning the way scheduling and sending email works for AWeber customers.
Before we dig into the description let's take a look at what the design used to be for comparison's sake.
Customers, specifically Americans, found the idea of "Queueing" a message very confusing.
Queue simply isn't a commonly used word in our vocabulary.
Using the word "Queue" in the interface was a choice our engineering team made years ago, partly as a well-intentioned nod to how our back-end system actually processes email for our 100k+ customers sending simultaneously.
"Oh no! Is this message already sending?!"
- a customer panicked during our benchmark usability testing
We found that customers would initially get caught up on whether a message was currently scheduled or not.
Even worse, some directly faulted themselves for "not getting it"... and honestly who can blame them?
At first glance, the label "Pending Broadcasts" can be misleading. It implies that the messages in this table are scheduled and waiting to send; in reality, it's a mix of draft messages and scheduled messages.
In the table above, "Send Date" was populated with misleading information (the date the message was created) until the user actively scheduled the email for a different date.
To illustrate just how confusing this could be, assume that today is December 1st, 2010.
Now, there is a message I'd like to send that I created one month prior (November 1st, 2010).
When I'm queueing the email to send today (December 1st, 2010), this is the dialog box I am greeted with.
Because the "Send Date" was populated with the "Date Created"... it appeared as though customers could actually email the past :).
A few parts of this design either didn't make sense or were just plain noisy.
Showing that a message was sent to All Subscribers was redundant: most messages go to an entire list, and the exception tends to be targeting specific segments of subscribers.
Differentiating between message types (HTML vs. Text/HTML) was confusing because the labels weren't intuitive. We also find that most people who prefer a specific message type tend to stick with it. Furthermore, our new editor made this feature obsolete, because it automatically generates a plain text version of every email sent to subscribers.
Last but not least, let's look at the performance statistics.
Using raw open rate data leads to strange, unusable statistics (e.g. 416% of people opened your message).
"How can my open rate be over 100%?"
- a baffled customer asked during testing
Customers came to a few different conclusions as to why their open rate could be over 100%.
Some thought it had to do with subscribers forwarding messages to friends. Others thought they were somehow using the system wrong.
What is the real answer?
When using raw statistics we count a single person opening an email multiple times.
We've found that if a subscriber uses a program like Outlook, when they load the program it defaults to the last email received. If they check their email once every hour during an 8 hour work day, that is the equivalent of 8 opens.
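The difference between raw and unique opens can be sketched in a few lines. The data and names below are hypothetical, purely for illustration; the point is that raw counts let one subscriber contribute many opens, so the resulting "rate" can exceed 100%.

```python
# Hypothetical open-tracking data: (subscriber_id, opens_recorded).
# "alice" mimics the Outlook scenario above -- her client re-loaded
# the message 8 times during an 8-hour workday.
open_events = [
    ("alice", 8),
    ("bob", 1),
    ("carol", 3),
]
recipients = 5  # total subscribers the message was delivered to

raw_opens = sum(count for _, count in open_events)  # every open counted
unique_opens = len(open_events)                     # each subscriber once

raw_rate = raw_opens / recipients * 100        # can exceed 100%
unique_rate = unique_opens / recipients * 100  # capped at 100%

print(f"raw open rate: {raw_rate:.0f}%")       # raw open rate: 240%
print(f"unique open rate: {unique_rate:.0f}%") # unique open rate: 60%
```

Counting each subscriber at most once is what keeps the percentage meaningful for comparison between messages.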
As with any project, before you seek a solution it's imperative that you do the groundwork to fully understand the problem you're setting out to solve.
It's also important to realize the ripple effect from making changes of this magnitude.
Teams I've consulted with for analysis include members from Customer Solutions, Education, Development, Data Management, Design and Marketing.
Depending on the nature of the project I'm working on, sometimes I'll draw a variety of potential solutions on a whiteboard before using software such as Balsamiq or Mockflow.
Once I'm satisfied with the wireframe, I'll digitize the solution and create a page documenting the project on Confluence. Over time, I've found this to be the most effective way to communicate with large groups of people.
The wireframe below addresses the issues with the original design:
After wireframing the design and going through rounds of stakeholder sign off, I usually work with a developer to implement what I've come up with.
For this particular project I also did the hi-fidelity design work and coded the CSS & HTML in a new Git branch. This code was then handed off to a developer to further build the functionality based on all the outlined use case scenarios.
I love and cherish the fact that I can positively impact real people who depend on our product.
After every customer-facing project, I keep my eye on Twitter to see what people are saying.
The responses were pretty awesome:
Due to the size of this project, and because it was a universal design refresh, we were forced to release it as part of a larger project.
Moving forward, I've learned to avoid this by being stricter about minimizing and managing my changesets. This not only makes releases safer and lower-risk for the company, it also lends itself to iterative improvement and smoother deployments.
This was the first project at AWeber that we really did everything in our power to immerse the entire company in the changes.
Throughout the design process I was constantly meeting with members from multiple departments, gathering feedback, digging through usage statistics, sharing my findings in mini presentations, etc.
In the final weeks, I put together a presentation that spoke to each change and the research supporting it, and personally gave it to every member of our Customer Solutions team over a period of three days. I'll never forget the moment two of our guys high-fived when I announced that we were getting rid of our Queue system. (...yeah, we're nerds!)
We also made sure the Customer Solutions team had multiple weeks to test the new design via our beta pool and provide feedback through an internal tool.
I also put together a PDF that further outlined all the design decisions, showing the old design in comparison to the new design.
The way our team came together to release these changes was nothing short of amazing and I was lucky to be part of it.
As we all know in software development, nothing is as easy as it initially seems. This project really would have benefitted from a proper spec for each edge case scenario we encountered.
We also ran into some issues when moving from our virtual test environment to using live production data that probably could have been avoided with more planning.
As part of this project, we ended up replacing raw statistics with percentage-based stats, which were easier to compare. During both our user research benchmarking sessions and our usability sessions with the new design, we didn't encounter the slightest pushback.
Since this release, we have iterated on the design and it now includes a toggle which allows customers to change between raw and percentage based statistics.