Dec 6th, 2013 by Ryan Gantz (@sixfoot6), Director of UX, Vox Media
One great thing about working at a company built on Lifestyle Brands With Passionate Audiences is that many of us enjoy a huge overlap between our job and our hobbies. We get our work done, but there’s a (largely unspoken) understanding that when Apple events or World Series games or E3 announcements come around, we can and should take time to enjoy them. No doubt World Cup draw coverage has been up on the projector in the DC office common area today, folks plunked down on the couch.
We love this stuff, and it’s always a great chance to see our editorial & video teams in action, and to watch closely as tools like SBN Live and Syllabus get real-world use.
But one terrible thing about working for a growing family of premium brands is that we’re always publishing awesome shit, and it’s really, really hard to keep up. Some highlights from Wednesday:
Those were just the longform features we published that day. These tabs reveal deep coverage of fascinating stories that required tight collaboration between editorial, design, sales, ad products and video. And it’s too much! I don’t mean to complain in a smug #CompanyHumbleBrag way (well, I sorta do), but rather in an actual selfish Seriously, Guys, I’m Having a Hard Time Focusing On My CSS Because Of These Tabs kind of way.
I want to keep up with our output so I can better understand our audiences, our process, our design choices, and subject matter relevant to the modern world. I’m glad Product tools help enable this stuff. But all too often I only have time to scan the layout, tweet some props to the writer and the team, and capture the story in Pocket so I can take a closer look later. Sadly, my track record for following up isn’t so strong, and I often feel bad about that.
Our sites publish stuff that people want to read & watch. Just this minute, folks in our main Campfire room are talking about this Curbed article on DC apartments. If I worked at a lousy temp job, I’d be in heaven with the distraction. As it is, sometimes the only articles I take time to read are the ones that come back around to me via friends on Facebook. To say nothing of engaging with other online communities, or consuming inspiring work published elsewhere.
So I’d love to hear both from Voxers and folks working in the wider world of media and tech: how do you manage to keep up? Should I be opening my laptop to read late at night, with a glass of bourbon? Bookmarking and watching our videos over the weekend? Doubling down on GTD? Going off the grid to live in the jungle? BECAUSE IT NEVER STOPS.
By David Zhou (@dz), Software Engineer, Vox Media
The Polygon PlayStation 4 Review and Xbox One Review involved an unprecedented level of coordination between the editorial and product teams at Vox Media. The goal was to create a pair of extremely high touch features to highlight the talents of our writers and video team, while pushing the envelope on longform design.
There were a lot of lessons learned, but the final results speak for themselves.
Initially, we did not consider SVG when approaching the design of the reviews. But we soon realized that the SVG format offers the ability to have delicate line art due to its vector and path capabilities, making it a great fit for our needs. And, as it turns out, not just our aesthetic needs, but our technical needs as well.
Polygon, as a site, is designed to be responsive. However, standard image formats like png don’t always perform well when asked to enlarge or shrink as dictated by the user’s browser size. SVGs, on the other hand, take on responsive properties perfectly: vectors can grow or shrink to arbitrary sizes without any loss of fidelity, and animations and operations done on SVG elements adjust relative to their size without any additional work.
But before we could use SVGs, they needed to be created. There were no easy preexisting SVGs of the two consoles that we could grab. They needed to be designed from scratch.
From Illustrator to SVG
Polygon’s designer, Tyson Whiting, painstakingly traced paths to create line art from real life photos of the two consoles. Doing this in Adobe Illustrator is relatively straightforward, though it is somewhat mind-numbing.
After exporting a test line art tracing done in Illustrator for the first time, however, there were a couple issues that needed to be addressed:
Many of the colors, fills, and stroke widths Tyson applied inside Illustrator were exported as inline attributes and styles on the SVG elements.
The actual SVG tags used to replicate Illustrator objects (e.g., rect) were not clear, and many times not the ones we wanted.
The solution was remarkably simple in its directness, if not in its cleverness: just manually massage Illustrator’s exported SVG files.
Inline styles were moved to a centralized stylesheet that affected all SVGs on the page, and we learned through trial and error what attributes were used by Illustrator to determine which SVG tags were chosen.
A factory line was developed: Tyson, Polygon’s designer, would create and export the SVGs in Illustrator; Ally Palanzi, an intern at Vox Media, would manually go into the SVG and add group (g) tags and comments to clearly label which elements did what; and finally, I would place and animate the SVGs wherever needed.
It wasn’t the fastest process, but ultimately we had a set of well-documented and clean SVGs to use.
Visually controlling SVG paths
The act of “drawing” an SVG is an optical trick created by manipulating two attributes of an SVG path. A path is a single, continuous line described by coordinates contained inside the element. It looks something like this:
<path d="m 160.60001,198 782.49997,0" />
Animating that path to draw itself is fundamentally an illusion created by adjusting the path’s stroke-dasharray and stroke-dashoffset properties.

The stroke-dasharray property is a whitespace- or comma-delimited list that controls the dashes and gaps that make up the path’s stroke. When set to "10 10", for example, the path’s stroke alternates between a dash of 10 and a gap of 10. Extrapolating this idea, if the total length of the path were 10, then the first “dash” would be as long as the entire path.

Specifically, this means there is no visual difference between a path that has no stroke-dasharray set and a path with a stroke-dasharray whose dash is equal to the path’s total length. Conversely, it also means that the gap after the dash is the length of the entire path.
Why is this important?
A second property, stroke-dashoffset, controls the offset of the dashes in the path: that is, where along the path the first “dash” of the pattern created by stroke-dasharray starts. The default value for stroke-dashoffset is 0, meaning the pattern starts immediately.
So what does it mean when stroke-dashoffset is set to the length of the path? It means that the dash pattern starts an entire path’s length away. Put another way, the gap between the dashes defined by stroke-dasharray fills the entire length of the path. Or, in even simpler terms, the path is invisible.
Combining these two properties means we can control how much of the path is showing, in a visual sense. The browser is still drawing and rendering the entire path; however, due to the placement and length of the dashes and gaps, a stroke-dasharray equal to the length of the path, combined with stroke-dashoffset values of varying lengths, creates the visual illusion of a partial stroke.
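The dash math above can be captured in a small helper. This is an illustrative sketch, not code from the reviews; `drawProgress` is a hypothetical name:

```javascript
// Given a path's total length and a progress value between 0 and 1,
// compute the dash properties that make only that fraction of the
// stroke visible. At progress 0 the offset pushes the entire dash out
// of view (invisible path); at progress 1 the path is fully drawn.
function drawProgress(totalLength, progress) {
  return {
    // One dash and one gap, each as long as the whole path.
    strokeDasharray: totalLength + ' ' + totalLength,
    // Slide the dash into view as progress approaches 1.
    strokeDashoffset: totalLength * (1 - progress)
  };
}
```

Applying these values to a path’s style (e.g. `path.style.strokeDashoffset = …`) reveals the corresponding fraction of the stroke.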
SVG path animation
As of this post, there is not a reliable way to determine the specific moment when a CSS transition has finished. It’s possible to fake it by using a setTimeout equal to the duration of the transition, but by doing so, it’s taken on faith that the transition wasn’t slightly slower or faster than the desired duration.
In any case, animating the property with either the CSS or the interval approach works something like the following pseudocode:
    for every <path> in an <svg>:
        store the result of getTotalLength()
        set stroke-dasharray equal to totalLength + " " + totalLength
        set stroke-dashoffset equal to totalLength
The animate method will differ based on the approach used to change the value of stroke-dashoffset. With a CSS transition approach, animate would set transition to something like:

    stroke-dashoffset 2s ease-in-out

Then, after setting stroke-dashoffset to 0 for each path, the browser will automatically animate that value change via the specified CSS transition.
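As a sketch of what that approach might look like in code (illustrative only; `startTransitionDraw` is a hypothetical name, and the `paths` here are any objects exposing a DOM-like `style`):

```javascript
// Kick off a CSS-transition-based draw for every path in a list.
// Each entry exposes a style object and a totalLength; on a real DOM
// node, totalLength would come from path.getTotalLength().
function startTransitionDraw(paths, duration) {
  paths.forEach(function (path) {
    var len = path.totalLength;
    path.style.strokeDasharray = len + ' ' + len;
    path.style.strokeDashoffset = len; // start fully hidden
    // In a real browser, force a style flush here (e.g. by reading
    // getComputedStyle) so the next assignment actually transitions.
    path.style.transition = 'stroke-dashoffset ' + duration + ' ease-in-out';
    path.style.strokeDashoffset = 0; // browser animates toward 0
  });
}
```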
As mentioned above, this is a great and performant way of doing things if the specific timing of when the transition finishes is not important. It gets tricky, however, if you want to chain animations together, or have things happen only after specific animations are finished.
But, there’s one major problem with this approach: the
draw method fires in ignorance of the browser’s render cycle, resulting in needless calls. In practical terms, the browser is doing more work than it needs to, thereby possibly lowering the FPS of the browser’s rendering, and making the animation appear slow.
requestAnimationFrame. There are great resources out around the web already explaining the hows and whys of
Using requestAnimationFrame, the above code now looks something like this:
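A duration-based requestAnimationFrame loop can be sketched like this (illustrative only; `offsetAt` and `animate` are hypothetical names, not the code that shipped):

```javascript
// Map elapsed time to a stroke-dashoffset: at t=0 the offset equals
// the full length (invisible); at t=duration it reaches 0 (fully drawn).
function offsetAt(totalLength, elapsed, duration) {
  var progress = Math.min(elapsed / duration, 1);
  return totalLength * (1 - progress);
}

// Duration-based loop. `path` is a DOM <path> element whose
// stroke-dasharray has already been set to its total length.
function animate(path, totalLength, duration) {
  var start = null;
  function draw(timestamp) {
    if (start === null) start = timestamp;
    var elapsed = timestamp - start;
    path.style.strokeDashoffset = offsetAt(totalLength, elapsed, duration);
    if (elapsed < duration) requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);
}
```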
Duration based vs frame-count based animation
There are two primary ways to determine how long an animation should run: an explicit duration, or a flexible frame count. There are benefits to each approach. The SVG animations on Polygon’s reviews primarily used the frame-count approach rather than setting an explicit animation duration.
Duration-based animation is the approach the code chunk above uses to animate a path. Given a desired duration (in the above case, 2000ms), the animation should run for exactly that duration. “Frames” are calculated by the proportion of elapsed time as compared to the total duration.
This has one big drawback: on slower computers, FPS dips result in skipped frames.
Imagine that in an ideal 60 FPS scenario, requestAnimationFrame is called 60 times per second. This is fine. It means the animated SVG path’s stroke-dashoffset is being changed 60 times a second — more than enough for it to appear smoothly animated to the human eye. However, if there’s a dip in FPS — say your computer’s hard drive suddenly starts thrashing and everything slows — then suddenly requestAnimationFrame is only being called 15 times a second. And because we use elapsed time as the determining factor for stroke-dashoffset, the visual result is one of dropped frames. The animation appears to skip around.
The alternate approach, then, is to stop using elapsed time and use a different metric. In a frame-count strategy, we rely on the fact that the browser will self-impose an FPS cap, and simply animate stroke-dashoffset based on the number of frames elapsed. The above code chunk would look something like this:
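A frame-count version can be sketched like this (illustrative only; `offsetAtFrame` and `animateByFrames` are hypothetical names, not the code that shipped):

```javascript
// Map a frame number to a stroke-dashoffset. The animation always
// takes totalFrames calls to complete, however long those calls take
// in wall-clock time.
function offsetAtFrame(totalLength, frame, totalFrames) {
  var progress = Math.min(frame / totalFrames, 1);
  return totalLength * (1 - progress);
}

// Frame-count-based loop: if the browser drops to 15 FPS, the
// animation simply takes longer instead of skipping ahead.
function animateByFrames(path, totalLength, totalFrames) {
  var frame = 0;
  function draw() {
    frame += 1;
    path.style.strokeDashoffset = offsetAtFrame(totalLength, frame, totalFrames);
    if (frame < totalFrames) requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);
}
```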
Done this way, if there is ever an FPS dip, the visual impact on the animation is that of a slowdown, rather than a frame skip.
And in the case of Polygon’s PS4 and Xbox One SVG animations, it was deemed that the smoothness of the animations was more important than the specific amount of time that the animations took to run.
Stay tuned for more news on Metronome!
Lastly: Come work with Vox Product! We are hiring.
By Blake Thomson, Software Engineer, SB Nation
cfbot get tableflip
For starters, I had a great experience using it to power Syllabus (Chorus’ liveblogging tool), and knew that the basic requirements for Syllabus were the same as for comments: a live-updating list of short posts.
Collections, Models, and Views
Distinct collection, model and view objects separate data manipulation (in the collection + model) from DOM manipulation (in the view), which used to be all mixed up in the same place.
Declarative events isolate the selectors used by JS for binding events. Instead of scattering these selectors inside functions and nested callbacks, having a single point of reference makes it easy to refactor markup and styles safely.
The Backbone.Events mixin, used to create an event-driven architecture, helps to decouple sections of code with separate concerns. Removing direct references between unrelated sections of code allows reasoning about smaller units of functionality at one time.
With these patterns in mind, I started writing code. I began with just a model and a collection, wiring them up to load data from the server, and inspect it in the browser console. I was able to write unit tests to verify this was happening correctly. Then I gradually built view upon view, adding model and collection functionality as required, until I had feature parity with the old system.
I ended up with 25 files (3 collections, 3 models, 17 views, 1 mixin, and 1 main) and 2000 lines of code. That’s slightly up from the 1800 lines before, but much more manageable, with the largest file being 400 lines (the comment model).
In addition to the above three patterns afforded by Backbone.js, I came up with several new patterns to help keep my code organized and DRY.
Killing boilerplate view code with compileTemplates()
In the main initialization code, I call a compileTemplates() function that loops through every view in a namespace, translating the templateId property of each into a template function. This makes views more DRY by extracting boilerplate template setup code.
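The idea can be sketched roughly like this (a reconstruction, not the shipped code; the injected `compile` callback and the `templateId` convention are the assumptions here):

```javascript
// Walk every view class in a namespace; for each one that declares a
// templateId, attach a compiled template function to its prototype.
// `compile` is injected so the sketch stays agnostic about the engine;
// in a Backbone app it might be:
//   function (id) { return _.template($('#' + id).html()); }
function compileTemplates(namespace, compile) {
  for (var name in namespace) {
    var view = namespace[name];
    if (view.prototype && view.prototype.templateId) {
      view.prototype.template = compile(view.prototype.templateId);
    }
  }
}
```

Each view can then call `this.template(data)` without repeating any lookup-and-compile boilerplate.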
Replacing Backbone.sync for working around an existing non-RESTful API
Because of an existing non-RESTful API, it was difficult to wire up my comment model and collection to use Backbone.sync for fetching and saving models. So I decided instead to abandon Backbone.sync completely, and write an `ajax` method to provide a simple wrapper around `jQuery.ajax`.
Because I needed similar functionality in both models and collections, I wrote a mixin that both could use: https://gist.github.com/thomsbg/6527302. This mixin provides several functions and configuration hooks for working with ajax requests:
- Objects including this mixin can specify a `urls` property, which the `urlFor` function will use to translate an action name to a URL, using Rails-style /:bound/path/:segments.
- Objects may also specify an `ajaxOptions` property, which provides default options to pass to `jQuery.ajax`. For example, this is used in one model to specify that all ajax requests should use the 'POST' method.
- When using the `ajax` method brought in with this mixin, some default events are fired, based on the action name passed as the first argument, i.e. calling `this.ajax('foo')` triggers the `fooStart`, `fooError`, and `fooSuccess` events, as appropriate. Generic-named events are fired on a global event mediator as well: 'ajaxStart', 'ajaxError', and 'ajaxSuccess'.
- Specifying the error and success settings in the options to `this.ajax` allows those callbacks to run first, before the applicable event is fired.
As a full example, inside of a model you can call `this.ajax` with an action name, callbacks, and other options: https://gist.github.com/6541380.
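The gists above are the real mixin; the core of `urlFor` — translating an action name into a URL by filling Rails-style `:segments` from the object’s attributes — boils down to something like this sketch (the `get` accessor is assumed, as on Backbone models):

```javascript
// Mixin sketch: map an action name to a URL using a `urls` table,
// replacing Rails-style :segments with the object's attribute values.
var AjaxMixin = {
  urlFor: function (action) {
    var template = this.urls[action];
    var self = this;
    return template.replace(/:(\w+)/g, function (match, key) {
      return self.get(key);
    });
  }
};
```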
Optimizing the event-binding bottleneck on large comment threads
A basic solution to rendering a list of comments is to instantiate and render a collection of Backbone view objects, one per comment. Ordinarily, every Backbone view calls delegateEvents() when it is initialized, which handles binding event handlers based on the contents of the `events` object. When rendering a massive collection of comments, doing this event binding once per comment slows things down. I used console.profile() to determine that it took around 1-2 seconds to bind events for a list of 500 comments (5-10ms for each call to delegateEvents()).
One way to solve this would be to just not use separate view objects for each comment, and have one super view that handles events for the entire comment list. But for code clarity, I wanted to keep the concerns of rendering a list of comments separate from those of rendering and reacting to events of a single comment. So instead I performed some trickery to bind events on the list once, while keeping the event definition in the child: https://gist.github.com/thomsbg/6527888.
This code redefines delegateEvents() on the per-comment child view to be a no-op, while supercharging delegateEvents() on the parent view in charge of rendering the entire list. When an event happens inside of a child, it eventually bubbles up to the parent. The parent is able to work out which child view object should respond to the event based on information in the DOM (id attribute), and call the appropriate method on that object. With this optimization, I saved that 5-10ms for each comment rendered, helping large threads render 1-2 seconds faster.
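The gist has the full trick; the dispatch half — the parent finding the right child view when an event bubbles up — reduces to a lookup like this sketch (names are mine; in the real code the lookup keys off an id attribute in the DOM):

```javascript
// Parent view keeps a registry of child views keyed by comment id.
// When a delegated event bubbles up, find the matching child view and
// invoke its handler. To keep this sketch runnable without a DOM, the
// event carries a reference to the comment element directly; in a
// browser you would walk up from event.target instead.
function dispatchToChild(childViews, event, methodName) {
  var id = event.commentEl.id;
  var child = childViews[id];
  if (child && typeof child[methodName] === 'function') {
    return child[methodName].call(child, event);
  }
}
```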
I had a couple surprises working with Backbone this time around. First of all, Backbone.sync doesn’t support more than the standard CRUD actions, but my API has separate endpoints for different update operations (recommend, flag, edit, etc). You could make a case to restructure these operations into sub-resources, but that was beyond the scope of this project. I expected the Backbone.sync mechanism to be a little more flexible. Thus, my custom jQuery.ajax wrapper was born.
Secondly, I was surprised to find that Collection add events fire in the order of the array passed to collection.add(), not in sorted order. This made adding element views to a sorted collection tricky. Instead of simply appending views to a parent element, the appropriate index for each added view needs to be found and used with insertBefore().
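Finding that insertion point is a simple comparator scan (a binary search would also work); a sketch, not the actual code:

```javascript
// Given models already in sorted order and a new model, find the index
// where the new model belongs. The new model's view element is then
// inserted with insertBefore(existingViewEls[index]); an index equal
// to models.length means a plain appendChild at the end.
function sortedIndex(models, model, compare) {
  for (var i = 0; i < models.length; i++) {
    if (compare(model, models[i]) < 0) return i;
  }
  return models.length;
}
```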
I initially called e.preventDefault() inside event handler methods, but sometimes I wanted to call those methods without an event parameter (such as from the browser console). I changed them all to return false instead, to allow them to be called without any arguments. This made it easy to explore and trigger methods manually via the console.
I purposefully tried not to use 3rd party libraries (such as marionette), because I was worried about too many abstractions getting in the way. I eventually re-invented some features I could have gotten for free (e.g. `compileTemplates()`), but it was worth it to learn deeply about the framework, and to have complete control over my code.
It’s still easy to introduce coupling when using Backbone. I rigorously kept all model-to-view references out of my code, but it would have been very easy to introduce them. Doing so would have defeated the purpose of my refactor, but another developer (perhaps future me) might want to in order to fix a bug or add a feature. cfbot gif me sad
Working with an API that returns underscored_attribute_names is jarring when working with a camelCase coding convention for variable and function names. However, I grew to like the visual distinction between server-generated attributes and client-side variables.
Writing tests seemed like a chore at first, but grew to be a gratifying way to work with the code I had written, validating thought experiments and keeping me sane when refactoring. Code coverage went from 0% (no tests at all) to ~50%, with 60 tests and 150 assertions.
Because each vertical uses its own templates and options to control the appearance and features in the comments library, writing effective, isolated tests for view objects didn’t seem worth it. Perhaps I just haven’t found the right pattern that allows for testing the right things yet.
Was it worth it?
Yes, a thousand times yes. Not only does this new code perform better than the old, it provides us with an extensible system for implementing new site features powered by comments. For example, it has already been used with great success to power SB Nation Live, a gameday chat room experience. Without this starting point to work from, completing that project would have been more painful, and taken much longer. Thanks Backbone!
We’ve been busy cranking, but wanted to share notes on the output of our Product Team hack week last month.
Vax 2.0 was a huge success. A total of 22 projects came out of the 4 days we were all in DC! On Sunday, the last day of Vax, various members from other parts of Vox Media joined the product team for a fine show and tell.
Justin led a team that explored the creation of a custom Roku app with some of Vox Media’s video content. Blake worked on a project called Dataclips that pulled out little pieces of data (like the post with the most comments) to help all the teams easily gather data about different aspects of their stories.
Warren and team worked on Voxomograph, a data-driven dashboard visualizer that reflected certain activities across all of our verticals in a beautiful, abstract way that intersects technology and digital art.
Tate and team worked on Ocupado, a meeting room scheduler/monitor that you can mount on the wall outside of a meeting room. The average cost of a nice commercial unit is around $1200; the team made a functional unit, linked up to our Vox Google Calendar, with a small monitor and a Raspberry Pi for under $100.
Lovitt and Skip gave Beacon a healthy dose of new features, including high traffic alerts, and big event warnings when scheduling a deploy.
Dylan and Uy explored opportunities to give a digital longform piece longevity while making the piece feel more intimate, incorporating non-intrusive sounds, video and graphics.
Dan worked on a Sass framework for prototyping in Chorus. He also started developing an inventory managing system for keeping track of our QA devices, Polygon’s video game library, and The Verge’s device library.
Brian (a first-time Vaxer) and Scott created a real-time features design tool that allows a features designer like Scott to work locally and quickly develop and iterate on features, instead of coding blindly and having to manually trigger a preview in the browser.
Trei, Ted and Josh worked on evolving our tentpole feature products with a concentration in sports features for SBNation.
Jose used a Raspberry Pi to create an audio/visual light alert system that turns different colors and makes sounds if something has gone wrong with our servers. It was great to see a hardware project, and the alert gave everyone an appropriate sense of urgency.
Kelsey and her team created a beautiful onboarding website for our HR needs, soon to appear on VoxMedia.com. It makes learning about Vox and our various tools and information portals easy and visually appealing.
Guillermo integrated a Chromecast into features, for an experimental but compelling second-screen viewing experience.
DZ worked on rethinking how snippets are created and integrated into our longform features design.
James and his team prototyped a beautiful print magazine with longform content from all Vox verticals, intended to work as a possible quarterly magazine.
Chris and his team created Gif Oracle, a function that integrates gifs into our commenting system, much like Campfirebot currently does in our internal chat rooms. And their work was deployed to production!
Jake designed and developed a hilarious JS game, featuring members of our team exploring a virtual office, squashing bugz, and consuming the all important food that keeps us going: tacos.
Tate concluded the presentations with a prototype of a potential commuter dashboard to surface bus time tables, delays in public transport, and the weather to folks in our east coast offices.
We took a break and ate chicken and kabobs, our last team dinner together. While we ate, the editorial leads from each vertical, along with the heads of sales and video, spoke about 2014 plans and left time for Q&A — a valuable chance for vision sharing and transparency between teams.
A good hack was had by all.
By Chao Li
Day 3: hack hack hack! Developers, developers, developers! Scott and Brian get down to business.
Sticker time for new Polygon support manager Jon!
The night ended with some Big Buck Hunter and Halo 3 at the office.
Hello, we are the Vox Media product team. We are designers, developers, operations engineers, and product and community managers, based in Washington, DC, New York, and Austin, and distributed remotely in cities from Santa Barbara, California to Springfield, Missouri.