Roku Speakers and Audio
Investigating users' relationship with audio and how they manage it on their TVs
Roku was seeking to help its users get the most out of their sound. As new audio devices were added, such as speakers, a soundbar, and a subwoofer, the team wanted to ensure that users were hearing each device at its best for the given media—whether it's a suspenseful movie, a hilarious cat video, or a booming new album.
In this role, I performed market and competitive research to develop recommendations for the information architecture of audio settings.
Roku was originally known as the OS that brought Netflix, Prime Video, and Hulu to televisions; it now supports streaming from over 3,000 channels. Users do not just go to their Roku TV to watch television; they might also pair it with Roku speakers to stream Spotify and create ambience for a dinner party. With this large variety of media, dynamic audio is needed—and provided—to get the most out of the device. However, many Roku users leave their audio settings at the defaults, limiting the vibrance they could experience. We wanted to see how we could help.
As this was my first assignment at Roku, it was important to familiarize myself with the brand, interactions, and patterns before jumping into trying to resolve the issue. To accomplish this, I created a custom heuristic evaluation process and template to analyze each screen and step with a fine-tooth comb. I based the evaluation on the Nielsen Norman Group's guide, as it is widely seen as a standard for interaction design, and modified it to accommodate the unique traits of our product. This included additional consideration for viewing distance, the typical in-home environment and conditions, navigation via remote control and directional pad, accessibility, voice interaction, and the Roku brand.
I then used the guide to walk through setup for several of Roku's products and peripherals, taking screenshots and making notes at each step along the way. Afterward, I presented my findings to the greater UX team to better understand some of the interactions and previous decisions made. I also documented the evaluation process for posterity and for future replication by other designers on other products.
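The evaluation template described above can be sketched as a simple data structure. This is a hypothetical reconstruction for illustration: the heuristic names mix Nielsen's classic set with the TV-specific additions mentioned in the text, and the wording, severity scale, and function names are my own assumptions, not Roku's actual template.

```python
# Sketch of a heuristic-evaluation template as a data structure.
# Heuristic names are illustrative, not the actual template's wording.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
]

TV_SPECIFIC_HEURISTICS = [
    "Legibility at 10-foot viewing distance",
    "Navigable with remote directional pad alone",
    "Supports voice interaction",
    "Accessible (contrast, captions, screen reader)",
    "Consistent with Roku brand patterns",
]

def evaluate_screen(screen_name, ratings):
    """Collect severity ratings (0 = no issue, 4 = usability catastrophe)
    for each heuristic on a single screen, and flag the worst offenders."""
    all_heuristics = NIELSEN_HEURISTICS + TV_SPECIFIC_HEURISTICS
    issues = {h: ratings.get(h, 0) for h in all_heuristics}
    worst = [h for h, sev in issues.items() if sev >= 3]
    return {"screen": screen_name, "ratings": issues, "critical": worst}

report = evaluate_screen(
    "Audio settings menu",
    {"Legibility at 10-foot viewing distance": 3,
     "Visibility of system status": 1},
)
print(report["critical"])  # heuristics rated severity 3 or higher
```

Structuring the template this way makes each screen's walkthrough directly comparable, which is what allows the process to be replicated across products.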
What it Offers
To reassess the construction, I downloaded the open-source template for the OTTO DIY+ model, a slightly larger shell, and printed the design at Georgia Tech. The online promotion boasted a sub-9-hour production process but gave no indication of which settings or machine were used to achieve that time. Using standard layer resolution and support structures, it took our Ultimaker 3 printer just over 23 hours to print.
This file provided more structure for the electronics to fit into and better tolerances for placement; however, it was still tight once all of the wires were connected. Additionally, production required $25 in plastic per robot. This would be impractical for educators who want to provide multiple units for their classes.
To cut down production costs and time, I turned to alternative materials. I sourced a template in the OTTO community that formed the shell through a series of interlocking jigsaw pieces. This afforded the use of a laser cutter for production. Flat pieces also enable the possibility for lower cost shipping if necessary.
Balsa Wood (version 1)
I first experimented with 1/8" balsa wood, which can be easily sourced from most art supply stores. The total material cost for the cut was $2.50, one-tenth the cost of the 3D-printed model. It took the laser cutter roughly 20 minutes to create the pieces and another 1.5 hours to assemble with wood glue, reducing total production time by over 90%. A wooden shell is also decently robust and affords the ability to paint or draw on the body, something children are likely to be excited about.
Chipboard (version 1)
Next, I cut the same pattern out of 1/8" chipboard. This model also cost $2.50 in materials. Total production time added up to 1 hour 20 minutes with the laser cutter and rubber cement bonding. Chipboard shells are lighter than wooden ones and can also be drawn on, but they are more susceptible to the water damage and stains that may occur in a classroom setting. Points of tension where the servos touch are also more likely to wear with use.
Acrylic (version 1)
The third material I used was 1/8" acrylic plexiglass. It comes in at $10 a shell, but with key features the others lack. Plexiglass cuts much faster and with a lower kerf than combustible materials like wood and chipboard, yielding much sharper fidelity. This attempt took 6 minutes to cut and 1.5 hours to assemble using Plastruct Bondene. The acrylic shell is very robust and can withstand impact, water, and cuts. It is also useful as an educational material, as students can view the servos in action and see how they interact with the inner system. It should be noted, however, that Bondene can be harmful if it comes into contact with the eyes or is swallowed, so it is not recommended for use by children.
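The cost and time figures reported above can be tallied side by side. A minimal sketch, using only the numbers stated in the text (the method labels and the rounding are mine):

```python
# Comparison of the fabrication options above, using the figures
# reported in the text (times in minutes, cost in USD per shell).

methods = {
    "3D print (OTTO DIY+)": {"cost": 25.00, "minutes": 23 * 60},
    "Balsa wood":           {"cost": 2.50,  "minutes": 20 + 90},   # cut + glue-up
    "Chipboard":            {"cost": 2.50,  "minutes": 80},        # total reported
    "Acrylic":              {"cost": 10.00, "minutes": 6 + 90},    # cut + bonding
}

baseline = methods["3D print (OTTO DIY+)"]
for name, m in methods.items():
    time_saved = 1 - m["minutes"] / baseline["minutes"]
    print(f"{name}: ${m['cost']:.2f}, {m['minutes']} min "
          f"({time_saved:.0%} faster than printing)")
```

The balsa figure works out to roughly a 92% time reduction against the 23-hour print, consistent with the "over 90%" claim above.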
Balsa Wood (version 2)
Wanting to explore the possibilities of balsa wood further, I found a plan that promoted adhesive-free assembly. If it worked, this method would provide a safe and efficient assembly process for children. For this to work, the tabs on the pieces must fit tightly enough to hold in place by tension. However, balsa wood has a hard-to-predict kerf value (the material lost to the laser), and the laser makes conical cuts, causing the fittings to deviate. Ultimately, I bonded the model with wood glue.
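The kerf problem above comes down to simple offsets: the beam burns away material on both sides of its path, so slots come out wider and tabs come out narrower than drawn. A minimal sketch of the compensation, with rough illustrative kerf values rather than measured constants:

```python
# Sketch of kerf compensation for press-fit tabs. Kerf values below are
# rough illustrative figures, not measured constants for any machine.

KERF = {"balsa_3mm": 0.30, "acrylic_3mm": 0.15}  # mm lost to the beam

def compensated_slot_width(nominal_mm, material):
    """Width to draw a slot so the cut slot matches the nominal size
    (the beam widens the opening, so draw it narrower)."""
    return nominal_mm - KERF[material]

def compensated_tab_width(nominal_mm, material, interference_mm=0.05):
    """Width to draw a tab so the cut tab ends up slightly oversized,
    holding by friction (interference fit) without adhesive."""
    return nominal_mm + KERF[material] + interference_mm

# A 3 mm tab in acrylic: draw it a bit wide so the cut part presses in.
print(round(compensated_tab_width(3.0, "acrylic_3mm"), 2))  # 3.2
```

Acrylic's smaller and more consistent kerf is what makes this offset reliable there, while balsa's variable kerf and the beam's conical profile defeat it.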
Acrylic (version 2)
For comparison, I modified the file to match the kerf values needed to fit acrylic pieces together. Acrylic burns away much less material than balsa wood and requires fewer passes with the laser, resulting in a more accurate cut. I was able to assemble an adhesive-free shell, but the bond is still tenuous: should a child drop the robot, it would likely collapse. Acrylic is also more brittle than wood and apt to crack as pieces are pressed together.
I am currently awaiting the arrival of additional electronics to test installation in each shell for a more well-rounded evaluation. I also want to explore etching labels into each piece to facilitate the creation of an assembly guide for others to follow.
One method developed to introduce code in early education is block coding. MIT and Google developed open-source platforms such as Scratch and Blockly that turn code into visual puzzle pieces. These blocks do not require knowledge of any programming language and make the interface generally more fun. Lego Mindstorms and Makeblock have adopted this method for coding their products.
There are numerous platforms for teaching coding to children through block-based programming, but there appears to be a gap in their uses. Many programs, such as Scratch, ScratchJr, Kittenblock, and Snap!, are primarily used to teach coding through playing 2D games. These platforms cannot transfer code outside of their own environments and therefore cannot export it to a robot. Platforms that do support robot coding often only support the company's own brand, such as mBot and Lego. This would not be a problem if the robots did not run $100+ each.
Although many of these platforms are geared directly toward children, they can have busy interfaces, complicated wording, and hard-to-discern colors that could be distracting or unintelligible for many children, particularly those with certain cognitive or visual impairments.
I am conducting heuristic and functional analyses of the currently available platforms and will develop the benchmarks necessary for a platform to be used in a disadvantaged program. These include: free of cost, flexible compatibility, accessibility features, and the ability to operate without internet. Additional benchmarks will be created and added through continued research with schoolchildren and educators.