Journal tags: bandwidth

Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.
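To make that idea concrete, here’s a rough sketch of how a site owner could defer to such a preference using the experimental prefers-reduced-data media feature. Support for that feature is still patchy and the prefetched URL is just a placeholder, so treat this as an illustration of the principle rather than production code:

```js
// A sketch only: prefers-reduced-data is still experimental and not widely
// supported, and the prefetched URL is a placeholder, not a real page.
const saveData = window.matchMedia('(prefers-reduced-data: reduce)').matches;

if (!saveData) {
  // Only speculate when the user hasn't asked to reduce data usage.
  const rules = document.createElement('script');
  rules.type = 'speculationrules';
  rules.textContent = JSON.stringify({
    prefetch: [
      { source: 'list', urls: ['/next-article/'] }
    ]
  });
  document.head.append(rules);
}
```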

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?
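For what it’s worth, the mechanism itself is simple. Here’s a stripped-down sketch of that kind of install-time caching in a service worker; the file list is made up for illustration rather than being the actual manifest from Resilient Web Design:

```js
// sw.js — a minimal sketch of whole-site precaching in a service worker.
// The cache name and file list are illustrative placeholders.
const CACHE_NAME = 'site-v1';
const ASSETS = [
  '/',
  '/chapter1/',
  '/chapter2/',
  '/css/styles.css'
];

self.addEventListener('install', (event) => {
  // Download and cache every listed page up front, so the whole site
  // works offline after the first visit.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
  );
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache first, falling back to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```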

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.

The Weight of the WWWorld is Up to Us by Patty Toland

It’s Patty Toland’s first time at An Event Apart! She’s from the fantabulous Filament Group. They’re dedicated to making the web work for everyone.

A few years ago, a good friend of Patty’s had a medical diagnosis that required everyone to pull together. Another friend shared an article about how not to say the wrong thing. This is ring theory. In a moment of crisis, the person involved is in the centre. You need to understand where you are in this ring structure, and only ever help and comfort inwards and dump concerns and problems outwards.

At the same time, Patty spent time with her family at the beach. Everyone reads the same books together. There was a book about a platoon leader in Vietnam. 80% of the story was literally a litany of stuff—what everyone was carrying. This was peppered with the psychic and emotional loads that they were carrying.

A month later there was a lot of coverage of Syrian refugees arriving in Europe. People were outraged to see refugees carrying smartphones as though that somehow showed they weren’t in a desperate situation. But smartphones are absolutely a necessity in that situation, and most of the phones were less expensive, lower-end devices. Refugeeinfo.eu was a useful site for people in crisis, but the navigation was designed to require JavaScript.

When people think about mobile, they think about freedom and mobility. But with that JavaScript decision, the developers piled baggage on to the users.

There was a common assertion that slow networks were a third-world challenge. Remember Facebook’s network challenges? They always talked about new markets in India and Africa. The implication is that this isn’t our problem in, say, Omaha or New York.

Pew Research provided a lot of data back then that showed that this thinking was wrong. Use of cell phones, especially smartphones and tablets, escalated dramatically in the United States. There was a trend towards mobile-only usage. This was in low-income households—about one third of the population. Among 5,400 panelists, 15% did not have a JavaScript-enabled device.

Pew Research provided updated data this year. The research shows an increase in those trends. Half of the population access the web primarily on mobile. The cost of a broadband subscription is too expensive for many people. Sometimes broadband access simply isn’t available.

There’s a term called “the homework gap.” Two thirds of teachers assign broadband-dependent homework, while one third of students have no access to broadband.

At most 37% of people have unlimited data. Most people run out of data on a frequent basis.

Speed also varies wildly. 4G doesn’t really mean anything. The data is all over the place.

This shows that network issues are definitely not just a third world challenge.

On the 25th anniversary of the web, Tim Berners-Lee said the web’s potential was only just beginning to be glimpsed. Everyone has a role to play to ensure that the web serves all of humanity. In his contract for the web, Tim outlined what governments, companies, and users need to do. This reminded Patty of ring theory. The user is at the centre. Designers and developers are in the next circle out. Then there’s the circle of companies. Then there are platforms, browsers, and frameworks. Finally there’s the outer circle of governments.

Are we helping in or dumping in? If you look at the data for the average web page size (2 megabytes), we are definitely dumping in. The size of third-party JavaScript has octupled.

There’s no way for a user to know before clicking a link how big and bloated the page is going to be. Even if they abandon the page load, they’ve still used (and wasted) a lot of data.

Third-party scripts—like ads—are among the worst offenders for dumping in (to use the ring theory model). The best practices for ads suggest that up to 100 additional HTTP requests is totally acceptable. Unbelievable! It doesn’t matter how performant you’ve made a site when this crap gets piled on top of it.

In 2018, the internet’s data centres alone may already have had the same carbon footprint as all global air travel. This will probably triple in the next seven years. The amount of carbon it takes to train a single AI algorithm is more than the entire life cycle of a car. Then there’s fucking Bitcoin. A single Bitcoin transaction could power 21 US households. It is designed to use—specifically, waste—more and more energy over time.

What should we be doing?

Accessibility should be at the heart of what we build. Plan, test, educate, and advocate. If advocacy doesn’t work, fear can be a motivator. There’s an increase in accessibility lawsuits.

Our websites should be as light as possible. Ask, measure, monitor, and optimise. RequestMap is a great tool for visualising requests. You can see the size and scale of third-party requests. You can also see when images are far, far bigger than they need to be.

Take a critical look at everything and pare it down. Set performance budgets—file-size budgets, for example. Optimise images, subset custom fonts, lazyload images and videos, get third-party tools out of the critical path (or out completely), and seek out lighter frameworks.
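As a rough illustration of a file-size budget, here’s a sketch that uses the Resource Timing API to total up what a page actually transferred and complain when it goes over an arbitrary limit. The 300 kB figure is only an example, not a recommendation:

```js
// A rough sketch of a client-side file-size budget check using the Resource
// Timing API. Note that cross-origin resources may report a transferSize of 0
// unless they send a Timing-Allow-Origin header.
const BUDGET_BYTES = 300 * 1024; // arbitrary example budget

window.addEventListener('load', () => {
  const resources = performance.getEntriesByType('resource');
  const total = resources.reduce((sum, entry) => sum + (entry.transferSize || 0), 0);

  if (total > BUDGET_BYTES) {
    console.warn(`Page weight is ${Math.round(total / 1024)} kB, over the ${BUDGET_BYTES / 1024} kB budget`);
  }
});
```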

Test on real devices that real people are using. See Alex Russell’s data on the differences between the kind of devices we use and typical low-end devices. We literally need to stop drowning people in JavaScript.

Push the boundaries. See the amazing work that Adrian Holovaty did with Soundslice. He had to make on-the-fly sheet music generation work on old iPads that musicians like to use. He recommends keeping old devices around to see how poorly your product works on them.

If you have some power, then your job is to empower somebody else.

—Toni Morrison

Context

I swear there’s some kind of quantum entanglement going on between Ethan’s brain and mine. Demonstrating spooky action at a distance, just as I was jotting down my half-assed caveat related to responsive design, he publishes a sharp and erudite explanation of what responsive design is and isn’t attempting to do. He uses fancy learnin’ words and everything:

When I’m speaking or writing about responsive design, I try to underline something with great, big, Sharpie-esque strokes: responsive design is not about “designing for mobile.” But it’s not about “designing for the desktop,” either. Rather, it’s about adopting a more flexible, device-agnostic approach to designing for the web. Fluid grids, flexible images, and media queries are the tools we use to get a bit closer to that somewhat abstract-sounding philosophy. And honestly, a more unified, less fragmented approach resonates with my understanding of the web on a fairly profound level.

Meanwhile Mark has written a beautiful encapsulation of the sea change that responsive design is a part of:

Embrace the fluidity of the web. Design layouts and systems that can cope to whatever environment they may find themselves in. But the only way we can do any of this is to shed ways of thinking that have been shackles around our necks. They’re holding us back.

Start designing from the content out, rather than the canvas in.

Both Ethan and Mark are writing books. I can’t wait to get my hands on them.

In the meantime, I wanted to take an opportunity to clear up some misunderstandings I keep seeing coming up again and again in relation to responsive web design. So put the kettle on and make a nice cup of tea while I try to gather my thoughts into some kind of coherence.

Breaking it down

To paraphrase Donald Rumsfeld, web design is filled with quite a few known unknowns. Here are three of them:

  1. Viewport: the dimensions of the browser that a person uses to access your content.
  2. Bandwidth: the speed of the network connection that a person uses to access your content.
  3. Context: the environment from which a person accesses your content.

Viewport

With the proliferation of mobile devices, tablets and every other kind of browsing device imaginable, there’s a high number of possible viewport sizes. But here’s the thing: that’s always been the case.

For over a decade, we have pretended that there’s a mythical perfect size that every person will be using. To start, that size was 640x480, then it was 800x600, then 1024x768 …but this magical ideal dimension was always a phantom. People have always been visiting our websites with browsers open to varying dimensions of width and height—the rise of “mobile” has simply thrown that fact into sharp relief.

The increasing proliferation of different-sized devices and browsers means that we can no longer cling to the consensual hallucination of the “ideal” viewport size.

Fortunately this is a solved problem. Liquid layouts were a good first step. Once you add media queries into the mix it’s possible to successfully deal with a wide range of viewport sizes.

Simply put, responsive web design solves the viewport question.

But that’s just one of three issues.

Bandwidth

Using either media queries or JavaScript, we can test for a person’s viewport size and adapt our layouts accordingly; there is no equivalent test for the speed of a person’s network connection.
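The viewport half of that is straightforward. Here’s a sketch of the JavaScript version; the 60em breakpoint is arbitrary, and the point is that there’s no comparable query for connection speed:

```js
// Sketch: testing the viewport is easy with matchMedia. The 60em breakpoint
// and the 'wide' class name are arbitrary examples.
const wideViewport = window.matchMedia('(min-width: 60em)');

function adaptLayout(mq) {
  // Swap between a single-column and multi-column layout, say.
  document.body.classList.toggle('wide', mq.matches);
}

adaptLayout(wideViewport);
wideViewport.addEventListener('change', adaptLayout);

// ...but there's no equivalent media feature for bandwidth, so any decision
// about connection speed has to be an assumption.
```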

This sucks.

The Filament Group are experimenting with responsive images and I hope to see a lot more experimentation in this area. But when it comes to serving up different-sized media to different people, we are forced to make an assumption. The assumption is that a small viewport equates to narrow bandwidth.
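In code, that assumption might look something like this simplified sketch; the data-large-src attribute and the breakpoint are invented for illustration, not taken from any particular technique:

```js
// Simplified sketch: start everyone on a small image, then swap in a larger
// source on wide viewports. The data-large-src attribute and the 60em
// breakpoint are illustrative placeholders.
if (window.matchMedia('(min-width: 60em)').matches) {
  document.querySelectorAll('img[data-large-src]').forEach((img) => {
    // Here the wide viewport is standing in for "probably decent bandwidth".
    img.src = img.dataset.largeSrc;
  });
}
```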

‘Tain’t necessarily so. If I’m using my iPod Touch I’m surfing with a fairly small screen but I’m not doing it over 3G or Edge—same goes for anyone idly browsing on the iPhone or iPad on their work or home connection.

Likewise, just because I’m using my laptop doesn’t mean I’m connected with a fat pipe. When I took the train from Seattle to Portland there was WiFi available …of a sort. And many’s the hotel connection that pushes the boundaries of advertising itself as “high speed.”

Once again, the “solution” to this problem for the past decade has been to ignore it. Just as with viewport size, we engage in a consensual hallucination of ideal bandwidth. Just as with viewport size, the proliferation of new devices is highlighting a problem that was always there. Unlike viewport size, the bandwidth issue is a much tougher nut to crack.

Responsive web design doesn’t directly solve the bandwidth question. I suspect that the solutions will involve a mixture of server-side and client-side trickery, most likely involving clever lazy loading for nice-to-have content. I’ll be keeping an eye on the work of Steve Souders.

In the meantime, the best we can do is stop assuming a best-case scenario for bandwidth.

Context

You don’t know what a person is doing when they visit your website. It’s possible to figure out what viewport size they are accessing your content with and it might even be possible to figure out how fast their network connection is but short of clairvoyance, there’s no way of knowing whether someone is in a hurry or looking to spend some time hanging out on your site.

Once again, this has always been the case. Once again, we have up ‘till now ignored the problem by pretending the person visiting our website—the same person with the perfect viewport size and the fast internet connection—doesn’t mind being served up dollops and dollops of so-called “content”, very little of which is directly relevant to them.

The rise of services like Readability and Safari’s Reader mode demonstrate that the overabundance of page cruft is being interpreted as damage and routed around.

The context problem—figuring out what a person is doing at the moment they visit a site—is really, really hard.

Responsive web design does not solve the context problem. It doesn’t even attempt to. The context problem is a very different issue to the viewport problem—which responsive web design does solve. As Mark put it:

It’s making sure your layout doesn’t look crap on diff. sized screens. Nothing more.

He was responding to Brian and Kevin who I think may have misunderstood the problems that responsive web design is trying to solve. Brian wrote:

anyone that claims “responsive design” as a best practice clearly has never actually tried to support multiple contexts or devices.

Those are two different issues: contexts and devices. The device issue breaks down into viewport size and bandwidth. Responsive design is certainly a best practice when tackling the viewport issue. But Brian’s right: responsive design does not solve the problem of different contexts. Nor did it ever claim to.

As I’ve said before:

The choice is not between using media queries and creating a dedicated mobile site; the choice is between using media queries and doing nothing at all.

If responsive design were being sold as a solution to the context problem, I too would be annoyed. But that’s not the case.

The mythical mobile context

As with the viewport issue and the bandwidth issue, the context issue—which has always been there—is now at the fore with the rise of mobile devices. As well as trying to figure out what a person wants when they visit a website, we now have to think about where they are, where they are going and where they have just been.

This is by far the toughest problem.

Bizarrely, this is the very known unknown that I see addressed as though it were solved. “Someone visits your site with a mobile device therefore they are in a rush, walking down the street, hurriedly trying to find your phone number!”

Really?

The data does not support this. All those people with mobile devices sitting on a train or sitting in a cafe or lounging on the sofa at home; they are all in a very different context to the imaginary persona of the mobile user rushing hither and thither.

We have once again created a consensual hallucination. Just as we generated a mythical desktop user with the perfect viewport size, a fast connection and an infinite supply of attention, we have now generated a mythical mobile user who has a single goal and no attention span.

More and more people are using mobile devices as a primary means of accessing the web. The number is already huge and it’s increasing every day. I don’t think they’re all using their devices just to look up the opening times of restaurants. They are looking for the same breadth and richness of experience that they’ve come to expect from the web on other devices.

Hence the frustration with mobile-optimised sites that remove content that’s available on the desktop-optimised version.

Rather than creating one site for an imaginary desktop user and another for an imaginary mobile user, we need to think about publishing content that people want while adapting the display of that content according to the abilities of the person’s device. That’s why I’m in favour of universal design and the One Web approach.

That’s also why responsive web design can be such a powerful tool. But make no mistake: responsive web design is there to help solve the viewport problem, not the context problem.

Update: Of course the usual caveat applies. Also, here’s some clarification about what I’m suggesting.

Simple Storage Service

Amazon’s new S3 service looks very interesting indeed. At first glance, it just looks like a very cheap way of storing and retrieving files — which it is — but the really fascinating aspect is that there is no user interface. It is purely a web service. As Sam Newman says:

When you get down to it, Amazon S3 is simply a large, distributed hash map with an API. Unless people build applications on top of it, it’s useless.

The creators of S3 have gone out of their way to keep the architecture as simple as possible. This is a smart move. I’m a great believer in the power of stupid networks.
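Just to illustrate the hash-map-with-an-API idea, here’s what storing and retrieving an object looks like with the current AWS SDK for JavaScript. The bucket name, key, and region are placeholders, and the SDK is only a convenience wrapper around the underlying REST API:

```js
// A sketch using the modern AWS SDK for JavaScript (v3). The bucket name,
// key, and region are placeholders. Amazon's original interface was a plain
// REST (and SOAP) API; this SDK wraps it.
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

// Store a value under a key: the "hash map" in action.
await s3.send(new PutObjectCommand({
  Bucket: 'example-bucket',
  Key: 'hello.txt',
  Body: 'Hello, world!',
}));

// Retrieve it again by the same key.
const { Body } = await s3.send(new GetObjectCommand({
  Bucket: 'example-bucket',
  Key: 'hello.txt',
}));
console.log(await Body.transformToString());
```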

Leaving aside the underlying technology, S3 is good news in purely practical terms. If nothing else, this should start a price war for data storage. Yet another barrier to entry has been lowered for anyone looking to publish anything online. Odeo and YouTube are good for audio and video respectively, but the agnostic nature of S3 means that you can store and stream on your own terms.

Hardware has been getting cheaper and cheaper for some time. Now it looks like bandwidth is heading the same way (for some amusing anecdotes on bandwidth issues, be sure to listen to Burnie Burns’ keynote from SXSW).

I’m looking forward to playing around with S3. For a service with no face, it sure looks like it’s got legs.