By John Gruber
Throughout the entire rumor cycle for this year’s new iPhones, we’ve been inundated with reports of two new screen sizes, 4.7 and 5.5 inches. But while the physical sizes of these displays leaked early and often, the exact pixel dimensions have not.
At this point, there’s too much smoke around the “two new iPhones, one at 4.7 inches and the other 5.5 inches” narrative for there not to be a fire. I think that’s what Apple is planning to announce next month — not because anyone “familiar with the plans” has told me so, but simply because so many parts have leaked corroborating these two sizes.
I’ve spent much of the last month trying to figure out the pixel counts for these displays, and it’s actually quite tricky. When Apple changed the iPhone display previously, they did so in obvious ways. With the iPhone 4’s retina display, Apple kept the physical size exactly the same (3.5 inches¹) and exactly doubled the pixels-per-inch resolution (from 163 to 326). When the iPhone 5 increased the size to 4 inches and changed the aspect ratio (from 3:2 to 16:9), they simply added pixels vertically. Same pixels-per-inch resolution, same width (640 pixels), new height (1136 pixels instead of 960).
There is no similar “easy” way to do either a 4.7- or a 5.5-inch iPhone display.
But after giving it much thought, and a lot of tinkering in a spreadsheet, here is what I think Apple is going to do:

| Size | Retina | Pixels | Points |
|------|--------|--------|--------|
| 4.7 | @2x | 1334 × 750 | 667 × 375 |
| 5.5 | @3x | 2208 × 1242 | 736 × 414 |
@2x means the same “double” retina resolution that we’ve seen on all iOS devices with retina displays to date, where each virtual point in the user interface is represented by two physical pixels on the display in each dimension, horizontal and vertical. @3x means a new “triple” retina resolution, where each user interface point is represented by three display pixels. A single @2x point is a 2 × 2 square of 4 pixels; an @3x point is a 3 × 3 square of 9 pixels.
I could be wrong on either or both of these conjectured new iPhones. I derived these figures on my own, and I’ll explain my thought process below. No one who is truly “familiar with the situation” has told me a damn thing about either device. I have heard second- and third-hand stories, though, that lead me to think I’m right.
First, I’m assuming both the 4.7- and 5.5-inch rumors are true. Second, I’m assuming a 16:9 aspect ratio for both displays.² Given these assumptions and the Pythagorean Theorem, it’s easy to create a spreadsheet model that gives you the pixels-per-inch resolution for a given pixel count.
For example, consider the 4.7-inch display. That’s the diagonal measured in inches. Given the height and width in pixels, we can solve for the diagonal in pixels (a² + b² = c²). Starting with values of 1334 and 750 for a and b, we get roughly 1530.4 pixels diagonally solving for c. Divide 1530.4 by 4.7 inches, and you get 326 pixels/inch — exactly the same pixel density as all previous iPhone retina displays (and the retina iPad Mini).
When I’ve written about this sort of diagonal pixel division in the past, I’ve gotten complaints that there’s no such thing as “diagonal pixels” or fractional pixels. But for our purposes, that doesn’t matter. Effectively we’re treating “pixel” as a unit of measure — the length of an actual pixel on the display. If you don’t believe me, we can use the Pythagorean Theorem the other way, to compute the length of the sides in inches given the diagonal in inches and the aspect ratio. A 16:9 display with a diagonal measuring 4.7 inches has a height of 4.1 inches and width of 2.3. 1334/4.1 and 750/2.3 both work out to roughly 326 pixels per inch. Trust the math.
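To make that two-way check concrete, here is the arithmetic from the last two paragraphs as a short Python sketch (the language and the helper name are my choices, purely for illustration):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal length in pixels (a² + b² = c²)
    divided by the diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# The conjectured 4.7-inch display:
print(round(pixels_per_inch(750, 1334, 4.7)))  # 326

# The check the other way: side lengths in inches from the 4.7-inch
# diagonal and the 16:9 aspect ratio, then pixels divided by inches.
diag_units = math.hypot(16, 9)
height_in = 4.7 * 16 / diag_units  # ≈ 4.1 inches
width_in = 4.7 * 9 / diag_units    # ≈ 2.3 inches

# Both quotients land at roughly 326 pixels per inch:
print(1334 / height_in, 750 / width_in)
```

Trust the math: both directions converge on the same density.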
Keeping the same 326 pixels-per-inch density would fit with the patterns Apple has established. They keep reusing the same pixel densities across iOS devices. In the past, I’ve speculated that this might be a matter of economy of scale — that they just cut different size displays from the same (large) sheets of LCDs. I no longer think that’s the reason. For one thing, they stick to these same pixel densities even across devices that use entirely different displays. The iPhone 5, retina iPod Touch, and retina iPad Mini all use 326 PPI displays, but all three use different LCD display technology.
I think the explanation has more to do with the physical size of user interface elements and touch accessibility. It’s about points. User interfaces in iOS aren’t specified in display pixels, they’re specified in virtual points. On non-retina devices, points and pixels were one and the same. But as stated earlier, on @2x retina devices, each (virtual) point represents two (physical) pixels on the display in each dimension.
Apple has always recommended tappable targets of at least 44 points. From the iOS HIG:
Make it easy for people to interact with content and controls by giving each interactive element ample spacing. Give tappable controls a hit target of about 44 × 44 points.
This recommendation is based not on aesthetics — how the controls look — but on the size of human fingertips. That’s the real constraint. 44 points isn’t a magic number; it’s a by-product of the size of the pixels on the original 2007 iPhone display (pre-retina, one point equaled one pixel). On every iPhone to date, from the original through the 5S and 5C, there has been a single consistent point resolution: 163 points per inch. That means a 44 × 44 point UI element has remained exactly the same physical size (as measured in inches or millimeters).
The original iPad introduced a second point resolution: 132 points per inch. This has remained consistent on the large (9.7 inch) iPads. The iPad Mini (both original and retina) uses the iPhone’s 163 points-per-inch resolution. Apple’s recommended target sizes did not change for the 9.7-inch iPad: they simply made the recommended tap targets physically larger. 44 points on an iPhone or iPad Mini is roughly 0.27 inches (6.9 mm). 44 points on a 9.7-inch iPad is 0.33 inches (8.4 mm). Everything is bigger by a factor of about 1.24.
Making UI elements (and text) bigger on the iPad works, both in terms of touch and visual accessibility. Making UI elements smaller, however, would not work, either physically (touch sizes) or visually (legibility). Any changes that Apple makes to iOS displays, in terms of physical dimensions or pixel resolution, necessitate extra work for app developers and designers to fully support. They’ve gone from @1x to @2x, and eventually (I think next month) they will go to @3x. They’ve introduced three different aspect ratios: 3:2 (now deprecated), 16:9 (all recent iPhones), and 4:3 (all iPads). They’ve used two different sizes of the same aspect ratio (iPad and iPad Mini).
But what they have never done, and I believe never will do, is redefine the virtual point to something other than 1/44th the recommended minimum tap target size for every device.
Apple has already started encouraging iOS developers to begin using adaptive layout techniques. See session 216 from WWDC 2014: Building Adaptive Apps with UIKit. What’s telling when you watch that session and read the documentation is that developers are clearly expected to anticipate new aspect ratios (whether for new displays, or for a still-hypothetical but rumored split-screen multitasking feature on future iPads) and new physical sizes, but nowhere is there any suggestion that the role of the point as the UI unit of measure will change.
This is important when speculating about new iOS device displays. Take the physical pixels-per-inch of the display, divide by the retina factor (@1x, @2x, @3x, etc.), and you get the points-per-inch for that proposed display. If that result is higher than 163 points-per-inch, then a 44-point UI element would be undesirably small as a touch target for human fingertips. What we want are display resolutions that provide more total points but the same or fewer points-per-inch.
Now, in theory, Apple could go ahead and do this anyway, and on such a device developers would have to use an entirely different recommended size, in points, for UI elements. But effectively that would require double the design work. The adaptive techniques and APIs that Apple is recommending (again, see WWDC 2014 Session 216) are intended to enable apps to be designed just once and flexibly adapt to different displays.
It’s also important to consider the two possible (and not necessarily conflicting) reasons why larger displays are desirable:

1. To show more content on screen.
2. To make on-screen content bigger.
#1 is about showing more content on screen (e.g. more text per page in an e-book). #2 is about making the content bigger (e.g. larger text in an e-book). #1 is about increasing the number of points, not (only) the number of pixels. #2 requires larger points (fewer points per inch).
As a baseline, let’s consider the existing iPhone 5 / 5C / 5S display:
| Size | Retina | Pixels | Points | px/inch | pt/inch |
|------|--------|--------|--------|---------|---------|
| 4.0 | @2x | 1136 × 640 | 568 × 320 | 326 | 163 |
The easiest thing Apple could do to create 4.7 and 5.5 inch displays would be to use the current 1136 × 640 pixel resolution, but this leads to several undesirable results, even though developers would have to do nothing new to support it. Such a 4.7-inch display would have a resolution of only 277 pixels per inch, and a 5.5-inch display would come in at only 237 pixels per inch. Neither display would show any additional content compared to the iPhone 5, and though everything would be physically bigger, the lower pixel-per-inch resolution would make everything look jaggier, especially on the 5.5-inch model. That’s not going to happen. New iPhone displays need to look as good as or better than the existing ones.
Now, let’s consider an oft-cited prospective new iPhone resolution, 1704 × 960, about which 9to5Mac’s Mark Gurman reported in May:
Fast forward to 2014, and Apple is preparing to make another significant screen adjustment to the iPhone. Instead of retaining the current resolution, sources familiar with the testing of at least one next-generation iPhone model say that Apple plans to scale the next iPhone display with a pixel-tripling (3X) mode. […]
568 tripled is 1704 and 320 tripled is 960, and sources indicate that Apple is testing a 1704 × 960 resolution display for the iPhone 6. Tripling the iPhone 5’s base resolution would mean that the iPhone 6’s screen will retain the same 16:9 aspect ratio as the iPhone 5, iPhone 5s, and iPhone 5c.
Simply tripling the base point size would make things relatively easy for developers — it’d be akin to the 2010 introduction of the first retina device, the iPhone 4. Developers would just need to add @3x graphic assets.³ The layout of the app, as specified in points, would remain unchanged.
I think Gurman was right that Apple was testing an @3x mode. I’m almost certain he was wrong that they ever tested 1704 × 960, unless they considered a new iPhone with a 4.0-inch display. That’s the only size at which 1704 × 960 makes any sense. Why? Because as measured in points, an @3x 1704 × 960 display would show no additional content compared to the iPhone 5. It’d still be 568 × 320 points. (One could comfortably reduce their Dynamic Type system-wide font size preference on such a display, but it wouldn’t increase the amount of content displayed by default. Dynamic Type alone is not a good enough solution to showing more content — the only good solution is increasing the number of points, not just the number of pixels.)
A 4.7-inch 1704 × 960 display would require a 416 pixels-per-inch display, and would have a scaling factor of 1.18 compared to all previous iPhones. It would display the same content, but everything would be 1.18 times larger. A 5.5-inch 1704 × 960 display would require a 356 pixels-per-inch display, and would display content at a comically large 1.38 scaling factor. (For comparison’s sake, the iPad Air has a scaling factor of 1.24 compared to the iPad Mini.)
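Those scaling factors fall straight out of the points-per-inch division. A small sketch, under the same assumptions as above:

```python
import math

BASELINE_PT_PER_INCH = 163  # every iPhone from 2007 through the 5S

def scale_factor(width_px, height_px, diagonal_in, retina_factor):
    """How much bigger (or smaller) UI elements would render,
    relative to the 163 pt/in iPhone baseline."""
    pt_per_inch = math.hypot(width_px, height_px) / diagonal_in / retina_factor
    return BASELINE_PT_PER_INCH / pt_per_inch

# 1704 × 960 at @3x:
print(round(scale_factor(960, 1704, 4.7, 3), 2))  # 1.18
print(round(scale_factor(960, 1704, 5.5, 3), 2))  # 1.38, comically large
```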
The one thing that’s magic about 1704 × 960 is that it’s the one resolution that would keep the iPhone akin to the iPad — multiple physical sizes and retina factors, but with one universal dimension in terms of points. You know how iPad apps have the exact same 1024 × 768 layout on all iPads, just with different physical and retina scales? That’s what 1704 × 960 (with @3x retina scale) would do for the iPhone. I think the iPhone is fundamentally different from the iPad in this regard, however.
(As I said above, 1704 × 960 would work perfectly for a new 4.0-inch iPhone with @3x retina scaling. Layout and UI element sizes would remain unchanged from the iPhone 5 series, but everything would look 1.5 times sharper with a 489 pixels-per-inch display. By all accounts, however, there is no such device in the works. Apple seems to be leaving the 4.0-inch size behind.)
| Size | Retina | Pixels | Points | Area Factor | px/inch | pt/inch | Scale Factor |
|------|--------|--------|--------|-------------|---------|---------|--------------|
| 4.7 | @2x | 1334 × 750 | 667 × 375 | 1.38x | 326 | 163 | 1.0x |
| 5.5 | @2x | 1334 × 750 | 667 × 375 | 1.38x | *278* | *139* | 1.17x |
At 4.7 inches, 1334 × 750 works perfectly as a new iPhone display, addressing problem #1, showing more content. With point dimensions of 667 × 375, this display would show 1.38 times more points than the iPhone 5. At 326 pixels-per-inch, everything on screen would remain exactly the same physical size. There would just be 38 percent more room for content.
This resolution is feasible for a 5.5-inch display, but in my opinion it doesn’t work well enough, for the reasons italicized in the above table. 278 pixels-per-inch is unacceptably low — again, new iPhone displays need to look as good as or better than the previous models. Apple does this with the iPad Air and Mini (two sizes, same pixel count), but the iPad Air gets away with its sub-300 pixels-per-inch display because you tend to hold it further away from your eyes than you do a phone.
If Apple were to use this display resolution for both the 4.7- and 5.5-inch phones, the relationship between the two devices would only be a matter of scale. Everything on the 5.5-inch iPhone would appear 1.17 times larger than on the 4.7-inch one, but they would each show the same amount of content. I don’t think that’s a good enough reason to produce two new larger iPhone display sizes.
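Every row of these tables can be derived mechanically. Here is a sketch of that derivation (my own Python; the dictionary keys mirror the table’s column headers):

```python
import math

IPHONE5_POINTS = 568 * 320  # baseline content area in points

def display_row(diagonal_in, retina, height_px, width_px):
    """Derive the table's columns for a candidate display."""
    pts_h, pts_w = height_px // retina, width_px // retina
    ppi = math.hypot(height_px, width_px) / diagonal_in
    return {
        "points": (pts_h, pts_w),
        "area_factor": round(pts_h * pts_w / IPHONE5_POINTS, 2),
        "px_per_inch": round(ppi),
        "pt_per_inch": round(ppi / retina),
        "scale_factor": round(163 / (ppi / retina), 2),
    }

# The 1334 × 750 candidate at both physical sizes:
print(display_row(4.7, 2, 1334, 750))
print(display_row(5.5, 2, 1334, 750))
```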
Gurman this week reported on a configuration file in the latest iOS 8 beta SDK, suggesting a display size — measured in points, not pixels — of 736 × 414. That, indeed, was interesting — but not so much at @2x, as Gurman posited.
| Size | Retina | Pixels | Points | Area Factor | px/inch | pt/inch | Scale Factor |
|------|--------|--------|--------|-------------|---------|---------|--------------|
| 4.7 | @2x | 1472 × 828 | 736 × 414 | 1.68x | 359 | 180 | 0.91x |
| 5.5 | @2x | 1472 × 828 | 736 × 414 | 1.68x | 307 | 154 | 1.06x |
This resolution is a non-starter for a 4.7-inch phone on the basis of scaling. With 180 points per inch, UI elements and text would be rendered almost 10 percent smaller than on an iPhone 5. With a scaling factor of 0.91, I don’t think it would appear comically small, but no one expects or wants things to appear smaller on a 4.7-inch phone than they do on a 4.0-inch phone. Not going to happen.
At 5.5 inches, however, this resolution works. The only hiccup is that the display would be “only” 307 pixels-per-inch. That meets Apple’s original 2010 definition of “retina display” as 300+ PPI, but just barely. Sticking with @2x retina scale would be less work for UI designers, and such a display would cost less and be less graphically taxing than what I’m about to propose, but a new iPhone with a lower resolution (in terms of pixels-per-inch) display is a bitter pill to ask people to swallow. Again, I think the display on a new top-tier iPhone must be as good or better than the previous model in every way. 307 pixels-per-inch doesn’t quite cut it, I think. (If I’m wrong about anything in this piece, however, this might be it — that 307 pixels-per-inch number is the only thing I see wrong about a 5.5-inch 1472 × 828 display.)
Take the same 736 × 414 point display size, and apply a retina scaling factor of @3x instead of @2x, and you get a very intriguing 5.5-inch iPhone:
| Size | Retina | Pixels | Points | Area Factor | px/inch | pt/inch | Scale Factor |
|------|--------|--------|--------|-------------|---------|---------|--------------|
| 4.7 | @3x | 2208 × 1242 | 736 × 414 | 1.68x | 539 | 180 | 0.91x |
| 5.5 | @3x | 2208 × 1242 | 736 × 414 | 1.68x | 461 | 154 | 1.06x |
Nothing changes compared to 1472 × 828 but one thing: pixels-per-inch resolution. The 4.7-inch size is still out because the scaling factor would render everything almost 10 percent smaller.
Everything works at these dimensions for a 5.5-inch display. With an increase in area of 68 percent and a scaling factor of 1.06, this display would address both reasons why someone might want a very large iPhone: it would show a lot more content, and it would render everything on screen, point-for-point, a little bit bigger. And at 461 pixels-per-inch, everything would be amazingly sharp. At that point it would be difficult for most of us to perceive individual pixels from any viewing distance, not just a typical one. This would be so sweet, I’d wager Apple comes up with a new marketing name for it: super-retina or something.⁴
The only issue is whether it’s technically feasible for Apple: (a) to obtain sufficient supply of 461 PPI displays at a reasonable cost, and (b) to produce a GPU capable of pushing that many pixels. (b) might not be a problem at all, considering that last year’s A7 SoC is already capable of driving the 2048 × 1536 retina iPad displays without breaking a sweat. As for (a), rumors abound that @3x is a real thing, and if that’s true, I think it’s for the 5.5-inch phone. In terms of the total number of pixels, the technical jump from the iPhone 5 series to this display would be quite comparable to that from the 3GS to the iPhone 4. With the 3GS to the 4, the number of pixels exactly quadrupled. Going from the 1136 × 640 iPhone 5 display to a new 2208 × 1242 display would increase the total number of pixels by a factor of about 3.8. After four years of @2x retina scale, I think it’s plausible that Apple could pull this off — especially since they pulled off the @1x to @2x jump after only three years.
The technical and manufacturing difficulties involved in such a leap could well explain the pervasive rumors that the new 5.5-inch model trails the 4.7-inch model in terms of production. It also fits with pervasive rumors that it will cost at least $100 more.
1564 × 880 is feasible for a 5.5-inch phone. That’s what you get if you maintain the 326 pixels-per-inch density and @2x scale. This would increase area — the number of points displayed on screen — by a whopping 89 percent. But it wouldn’t increase the size of what you see at all. I think the sweet spot for a 5.5-inch phone would allow you to see more content and make what you see at least a little bit bigger. So that’s why I’d bet against 1564 × 880. (1564 × 880 would be implausible for the 4.7-inch phone: it would render UI elements and text 15 percent smaller than all previous iPhones.)
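The 1564 × 880 numbers check out the same way (again, my own sketch):

```python
import math

# 1564 × 880: keep 326 px/in and @2x retina scale at 5.5 inches.
ppi = math.hypot(1564, 880) / 5.5      # ≈ 326 pixels per inch
area_gain = (782 * 440) / (568 * 320)  # ≈ 1.89: 89% more points
scale = 163 / (ppi / 2)                # ≈ 1.0: nothing rendered bigger

print(round(ppi), round(area_gain, 2), round(scale, 2))
```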
1920 × 1080 is a standard resolution for “high definition”, but the numbers only come close to working at @3x retina scale:
| Size | Retina | Pixels | Points | Area Factor | px/inch | pt/inch | Scale Factor |
|------|--------|--------|--------|-------------|---------|---------|--------------|
| 4.7 | @2x | 1334 × 750 | 667 × 375 | 1.38x | 326 | 163 | 1.0x |
| 4.7 | @3x | 1920 × 1080 | 640 × 360 | 1.27x | 469 | 156 | 1.04x |
| 5.5 | @3x | 1920 × 1080 | 640 × 360 | 1.27x | 401 | 134 | 1.22x |
None of those numbers jumps out at me as terrible (although the 1.22 scaling factor at 5.5 inches comes close), but none of them are particularly compelling either. At 4.7 inches, 1920 × 1080 would give you 27 percent more content (in points), and would increase the scale of points by 4 percent. Both of those are plausible, and a reasonable balance between showing more content and bigger content. But it seems like a worse tradeoff balance for 4.7 inches than my perceived sweet spot of 1334 × 750. I’ve repeated those numbers in the chart above for comparison. I think for the 4.7-inch phone, you want to focus on more content, not bigger content, and 1334 × 750 is a better — and cheaper, and more power efficient — option for that.
At 1920 × 1080, a 5.5-inch iPhone would skew too heavily toward showing bigger content than more content. People who really want bigger text would be served just as well, I think, with a 2208 × 1242 @3x display and changing the Dynamic Type font size. At 1920 × 1080, everyone would get very large type by default. Even worse, a 1920 × 1080 5.5-inch phone at @3x would have fewer points than a 1334 × 750 4.7-inch iPhone at @2x (640 × 360 vs. 667 × 375, respectively). I don’t think it’s feasible for the physically larger phone to show less on-screen content. This is why it’s essential to consider points, not just pixels.
Lastly, I considered 2272 × 1280 at @4x retina scale. Jumping from @2x to @4x would make things much easier for iOS UI designers (see footnote 3 below), but Apple’s top priority isn’t making life easier for UI designers. 2272 × 1280 is exactly double the current 1136 × 640 iPhone 5 display. The problem with that is that it doesn’t change the screen size in terms of points at all. You’d see exactly the same amount of content, only bigger (on the 4.7-inch phone) or much bigger (on the 5.5). Everything would look incredibly sharp as well, but I think showing more content is an essential aspect of the demand for large-screen phones.
The three main factors to consider for a new iPhone display size:

1. Content area, measured in points.
2. Pixel density — pixels per inch, which governs sharpness.
3. Point density — points per inch, which governs the physical size of on-screen content.
A 4.7-inch iPhone at 1334 × 750 pixels would change only one of those: content area. The points-per-inch and pixels-per-inch would remain unchanged from the iPhone 5 series, but the display would expand from 568 × 320 points to 667 × 375, an increase of 38 percent. This is such an Apple-like thing to do — keeping the exact same 326 pixels-per-inch density and simply putting more pixels in both dimensions to make a larger display — that I’m surprised I can only find one mention of it: an April report from KGI Securities analyst Ming-Chi Kuo. (In that same note, Kuo pegged the 5.5-inch iPhone at 1920 × 1080, which, for the reasons outlined above, I very much doubt.)
A 5.5-inch iPhone at 2208 × 1242 pixels (and @3x retina scaling) would improve all three factors:

1. Content area: 736 × 414 points, 68 percent more than the iPhone 5.
2. Pixel density: 461 pixels per inch, far sharper than any previous iPhone.
3. Point density: 154 points per inch, rendering everything about 6 percent bigger.
1. Remember back in 2007, when Apple was seemingly concerned that the 3.5-inch iPhone was too big? How times change. ↩
2. To be precise, when I say “16:9”, I mean “close enough to 16:9 for practical purposes”. 2208 × 1242 is in fact precisely 16:9: divide the height by the width and you get 1.777. 1334 × 750, however, is not precisely 16:9: divide and you get 1.7786. That doesn’t matter. It’s more than close enough to 16:9 for practical purposes. Note too that the existing iPhone 5 series 1136 × 640 display isn’t precisely 16:9 either. 1136 × 639 would be precisely 16:9, but an odd number of pixels in one dimension would be stupid. ↩
3. Moving to @3x from @2x is actually quite a bit trickier than moving from @1x to @2x was, because the math is no longer tidy. When you double, everything remains an even number. @3x leads to odd numbers. For example, what do you do with a 3-pixel wide stroke from your @2x interface? You can’t render it at 4.5 pixels, so you have to choose between rendering it at 4 or 5 pixels, making it slightly thinner or thicker. Or you could anti-alias it to approximate a 4.5-pixel stroke width, in which case you sacrifice sharpness and precision. We’re talking about hundredths of an inch, but trust me, iOS designers care about this. @3x is going to be a bit of a pain in the ass for pixel-obsessive designers. ↩
4. This in turn will drive the Android die-hards nutty, on the grounds that there have been Android handsets with 450+ PPI displays since last year, but Apple will play it on stage and in TV commercials as though they’re breaking new ground. ↩
Brian S. Hall poked me on Twitter yesterday:
“This style of communication is like reading a foreign language to me. I don’t understand what most of it means.” @gruber re Apple IBM memo.
I.e. that my criticism of the opaque business-jargon-laden style of Satya Nadella’s company-wide memo regarding Microsoft’s layoffs could apply just as aptly to Apple’s press release announcing their IBM partnership, which I didn’t criticize.
Hall has a point. That Apple press release is rather jargon-laden and opaque. I don’t know why that is, but my guess is that as a joint initiative, the press release was written jointly by Apple and IBM. Most press releases from Apple, though formal, are better written. (Some are even poignant in their relative brevity.)
But to compare a press release to a company-wide memo is a bit of an apples-to-oranges situation. A company-wide memo is not a press release, and Tim Cook sent a company-wide memo regarding the IBM deal, too. Unlike Microsoft, Apple doesn’t (yet?) post such memos for public consumption, but as usual, 9to5Mac has a copy (adorned with artwork from Darth).
You don’t have to be an industry insider to know that Microsoft and Apple have very different company cultures. One could argue that Nadella’s style is what Microsoft employees expect, and that my personal sensibilities more closely align with Apple’s culture. But I don’t buy it. I think clear writing is the result of clear thinking. Cook’s memo isn’t casual or informal; it simply isn’t dressed up with extraneous formalities and corporate-culture bromides. Nadella’s raises as many questions as it answers. ★
“Only Apple” has been Tim Cook’s closing mantra for the last few Apple keynotes. Here’s what he said at the end of last week’s WWDC keynote:
You’ve seen how our operating systems, devices, and services, all work together in harmony. Together they provide an integrated and continuous experience across all of our products, and you’ve seen how developers can extend their experience further than they’ve ever done before and how they can create powerful apps even faster and more easily than they’ve ever been able to.
Apple engineers platforms, devices, and services together. We do this so that we can create a seamless experience for our users that is unparalleled in the industry. This is something only Apple can do. You’ve seen a few people on stage this morning, but there are thousands of people that made today possible.
Is this true, though? Is Apple the only company that can do this? I think it’s inarguable that they’re the only company that is doing it, but Cook is saying they’re the only company that can.
I’ve been thinking about this for two weeks. Who else is even a maybe? I’d say it’s a short list: Microsoft, Google, Amazon, and Samsung. And I’d divide that short list into halves — the close maybes (Microsoft and Google) and the not-so-close maybes (Amazon and Samsung).
Samsung makes and sells a ton of devices, but they don’t control any developer platform to speak of. They’re trying with Tizen, but that hasn’t taken off yet. So their phones and tablets run Android, their notebooks run Windows or Chrome OS, and there’s no integration layer connecting all the other stuff they make (TVs, refrigerators, whatever). I think Tizen exists because Samsung sees the competitive disadvantage they’re in by not controlling their software platforms, but they’re nowhere close to having something that helps them in this regard.
Amazon sells devices (including soon, purportedly, phones) and certainly understands cloud services and the integration of features under your Amazon identity. But their aims, thus far, are narrow. Amazon devices really are just about media consumption — books, movies, TV shows — and shopping from Amazon. They don’t make PCs, so compared to Apple and the growing integration between Macs and iOS devices, Amazon isn’t even in the game. And with their reliance on Android (forked version or not), they don’t have anywhere near the control over their software platforms that Apple does.
Google has all three: platforms, devices, and services. But the devices that are running their platforms are largely outside their control. They sell “pure Google” Nexus devices, but those devices haven’t made much of a dent in the market. Google’s mindset a decade ago was centered around web apps running in browsers. Google didn’t need its own platform because every PC had a browser and people would use those browsers to do everything Google provided in browser tabs. That meta-platform approach has limits, though, particularly when it comes to post-PC devices. Their stated reason for buying Android wasn’t because they wanted to design and control the post-PC device experience, but because they wanted an open mobile platform on which their web services could not be locked out.
Google’s aspirations for seamlessness largely, if not entirely, revolve around Google’s own apps and services. They’ve long offered tab sharing between Chrome on multiple devices — a cool feature, much in line with the Continuity features Apple debuted at WWDC. But if Google did something similar for email, it would only work with Gmail. Gmail on your phone to Gmail in a tab in Chrome on your PC. (On the other hand, Google’s solution would likely work from Gmail on your iPhone too; Apple (Beats excepted) offers bupkis for Android users.)
That leaves Microsoft. Here’s a tweet I wrote during the keynote, 20 minutes before Cook’s wrap-up:
Microsoft: one OS for all devices.
Apple: one continuous experience across all devices.
That tweet was massively popular,¹ but I missed a word: across all Apple devices. Microsoft and Google are the ones who are more similarly focused. Microsoft wants you to run Windows on all your devices, from phones to tablets to PCs. Google wants you signed into Google services on all your devices, from phones to tablets to PCs.
Apple wants you to buy iPhones, iPads, and Macs. And if you don’t, you’re out in the cold.²
Apple, Google, and Microsoft each offer all three things: devices, services, and platforms. But each has a different starting point. With Apple it’s the device. With Microsoft it’s the platform. With Google it’s the services.
And thus all three companies can brag about things that only they can achieve. What Cook is arguing, and which I would say last week’s WWDC exemplified more so than at any point since the original iPhone in 2007, is that there are more advantages to Apple’s approach.
Or, better put, there are potentially more advantages to Apple’s approach, and Tim Cook seems maniacally focused on tapping into that potential.
Apple’s device-centric approach provides them with control. There’s a long-standing and perhaps everlasting belief in the computer industry that hardware is destined for commoditization. At their cores, Microsoft and Google were founded on that belief — and they succeeded handsomely. Microsoft’s Windows empire was built atop commodity PC hardware. Google’s search empire was built atop web browsers running on any and all computers. (Google also made a huge bet on commodity hardware for their incredible back-end infrastructure. Google’s infrastructure is both massive and massively redundant — thousands and thousands of cheap hardware servers running custom software designed such that failure of individual machines is completely expected.)
This is probably the central axiom of the Church of Market Share — if hardware is destined for commoditization, then the only thing that matters is maximizing the share of devices running your OS (Microsoft) or using your online services (Google).
The entirety of Apple’s post-NeXT reunification success has been in defiance of that belief: commoditization may be inevitable, but it won’t necessarily consume the entire market. It started with the iMac, and the notion that the design of computer hardware mattered. It carried through to the iPod, which faced predictions of imminent decline in the face of commodity music players all the way until it was cannibalized by the iPhone.
Apple suffered when they could not operate at large scale. When you go your own way, you need a critical mass to maintain momentum, to stay ahead of the commodity horde. To pick just one example: CPUs. Prior to the Mac’s switch to Intel processors in 2006, Macs were generally more expensive and slower than the Windows PCs they were competing against. There weren’t enough Macs being sold to keep Motorola or IBM interested in keeping the PowerPC competitive, and Apple didn’t have the means to do it itself. Compare that to today, where Apple can design its own custom SoC CPUs — which perform better than the commodity chips used by their competitors. That’s because Apple sells hundreds of millions of iOS devices per year. Apple’s commitment to making its own hardware provided necessary distinction while the company was relatively small. Now that the company is huge, it still provides them with distinction, but now also an enormous competitive edge that cannot be copied. You can copy Apple’s strategy, but you can’t copy their scale.
Microsoft and Google have enormous market share, but neither has control over the devices on which their platforms run. Samsung and Amazon control their own devices, but neither controls their OS at a fundamental level.
Microsoft and Google can’t force OEMs to make better computers and devices, or to stop junking them up with unwanted add-ons. Apple, on the other hand, can put anything it can achieve into its own devices. Apple wants to go 64-bit on ARM? Apple can do it alone.
Let’s take a step back and consider Apple’s operational prowess. In their most recent holiday quarter, they sold 51 million iPhones and 26 million iPads. In and of itself that’s an operational achievement. But further complicating the logistics: the best-selling devices (iPhone 5S and 5C, iPad Air and the iPad Mini with Retina Display) had only just been released that quarter. iOS device sales skew toward the high end, not the low end, because they’re not commodities. Brand-new devices sold in record numbers. The single best-selling and most important device was the iPhone 5S, with an all-new fingerprint sensor and camera, a secure enclave for the fingerprint data, and a brand-new Apple-designed A7 processor — the first in the industry to go 64-bit. No one else was making 64-bit mobile CPUs, and Apple sold tens of millions of them immediately. There are very few standard parts in these devices. Consider too that Apple has no way of knowing in advance which devices — and which colors of those devices — will prove the most popular.
But the whole quarter went off, operationally, pretty much without a hitch. Record unit sales, with fewer product shortages and delays than ever before. No one’s perfect — remember the white iPhone 4, which was announced in June 2010 but didn’t go on sale until April 2011? — but Apple is very, very good, and has been throughout the entire post-NeXT era.
Everyone knows that Tim Cook deserves credit for this operational success. Manufacturing, procurement, shipping, distribution, high profit margins — these are things we’ve long known Tim Cook excels at managing.
As Cook’s era as Apple’s CEO unfolds, what we’re seeing is something we didn’t know, and I think few expected. Something I never even considered:
Tim Cook is improving Apple’s internal operational efficiency.
It has long been axiomatic that Apple is not the sort of company that can walk and chew gum at the same time. In 2007, they issued a (very Steve Jobs-sounding) press release stating that Mac OS X Leopard would be delayed until October because the iPhone had consumed too many resources:
However, iPhone contains the most sophisticated software ever shipped on a mobile device, and finishing it on time has not come without a price — we had to borrow some key software engineering and QA resources from our Mac OS X team, and as a result we will not be able to release Leopard at our Worldwide Developers Conference in early June as planned.
In response, Daniel Jalkut wrote:
The best we can hope for is that it is only sleazy marketing bullshit. Because if what Apple’s telling us is true, then they’ve confessed something tragic: they’re incapable of building more than one amazing product at a time. The iPhone looks like it will be an amazing product, but if Apple can’t keep an OS team focused and operational at the same time as they keep a cell phone team hacking away, then the company is destined for extremely rough waters as it attempts to expand the scope of its product line.
Or consider the October 2010 “Back to the Mac” event, the entire point of which was to announce features and apps for the Mac that had started life on iOS years earlier.
That seems like ancient history, given the magnitude of the updates shown last week in both OS X Yosemite and iOS 8. All the things that make sense for both OS X and iOS are appearing together, this year, on both platforms. Everything from user-facing features like Extensions and Continuity to Swift, the new programming language. This requires more engineers working together across the company.
The same maestro who was able to coordinate the procurement, assembly, production, and shipment of 77 million iPhones and iPads in one quarter has brought those operational instincts and unquenchable thirst for efficiency to coordinating a Cupertino that can produce major new releases of both iOS and OS X, with new features requiring cooperation and openness, in one year. They’re doing more not by changing their thousand-no’s-for-every-yes ratio, but by upping their capacity.
The turning point is clear. The headline of Apple’s October 2012 press release said it all: “Apple Announces Changes to Increase Collaboration Across Hardware, Software and Services”. It turns out that was not an empty bromide, meant to patch over run-of-the-mill corporate political conflict. Tim Cook wanted Apple to function internally in a way that was anathema to Scott Forstall’s leadership style. The old way involved fiefdoms, and Forstall’s fiefdom was iOS. The operational efficiency Cook wanted — and now seems to have achieved — wasn’t possible without large-scale, company-wide collaboration, and collaboration wasn’t possible with a fiefdom style of organization.
That also happens to be the same press release in which Apple announced the ouster of retail chief John Browett, whose ill-fated stint at the company lasted just a few short months. Browett is a footnote in Apple history, but I think an important one. Apple hired him from Dixons, a U.K. electronics retailer akin to Best Buy here in the U.S. In short, a nickel-and-dime operation where the customer experience is not the top priority. Browett thus struck many as a curious choice for the head of Apple retail.
Browett’s hiring, and the failure of his tenure at Apple, raised a legitimate fear: that this was a sign of things to come. This — penny-pinching and prioritizing the bottom line, losing sight of excellence in the eyes of the customer as the primary purpose of the Apple Stores — this is what happens when the “operations guy” takes over the helm.
Turns out, we should have had no such worries. My guess is that it’s as simple as Cook having thought that there were operational improvements to be had in retail, and so he hired an operationally minded retail executive. He didn’t understand then what Angela Ahrendts’s hiring shows that he clearly does understand now: that Apple’s retail stores need to be treated much like Apple’s products themselves, and thus require the same style of leadership.
During the keynote last week, John Siracusa referenced The Godfather, quipping:
Today Tim settles all family business.
I’d say it’s more that Cook settled the family business back in October 2012. Last week’s keynote was when we, on the outside, finally saw the results. Apple today is firing on all cylinders. That’s a cliché but an apt one. Cook saw untapped potential in a company hampered by silos.
When Cook succeeded Jobs, the question we all asked was more or less binary: Would Apple decline without Steve Jobs? What seems to have gone largely unconsidered is whether Apple would thrive with Cook at the helm, achieving things the company wasn’t able to do under the leadership of the autocratic and mercurial Jobs.
Jobs was a great CEO for leading Apple to become big. But Cook is a great CEO for leading Apple now that it is big, to allow the company to take advantage of its size and success. Matt Drance said it, and so will I: What we saw last week at WWDC 2014 would not have happened under Steve Jobs.
This is not to say Apple is better off without Steve Jobs. But I do think it’s becoming clear that the company, today, might be better off with Tim Cook as CEO. If Jobs were still with us, his ideal role today might be that of an éminence grise, muse and partner to Jony Ive in the design of new products, and of course public presenter extraordinaire. Chairman of the board, with Cook as CEO, running the company much as he actually is today.
This is what only Apple can do:
Software updates that are free of charge and so easily installed that the majority of iOS and Mac users are running the latest versions of the OSes (a supermajority in the case of iOS). Apple can release new features and expect most users to have them within a year — and third-party developers can count on the same thing.
Hardware that is designed hand-in-hand with the software, giving us things like the iPhone 5S fingerprint scanner and the secure enclave, which requires support from both the operating system and the SoC at the lowest levels. And now Metal — custom graphics APIs designed specifically and solely for Apple’s own GPUs. A custom graphics API to replace an industry standard like OpenGL would have been a hard sell for Apple a decade ago, because the Mac market was so relatively small. Microsoft could do it (with DirectX) because of the size of the Windows gaming market. Now, with iOS, Apple already has the makers of four popular gaming engines on board with Metal.
Tim Cook has stated publicly that new products are in the pipeline, and he seems confident regarding them (as do other Apple executives). We can’t judge them yet, but consider this: Recall again that in 2007 Apple was forced to admit publicly that they had to pull engineering, design, and QA resources from the Mac in order to ship the iPhone. This year, new products are on the way, yet iOS and Mac development not only didn’t halt or slow, it sped up. In recent years, the company grew from being bad at walking and chewing gum to being OK at it, and most of us thought, “Finally”. But that wasn’t the end of the progression. Apple has proceeded from being OK at walking and chewing gum to being good at it. Thus the collective reaction to last week’s keynote: “Whoa.”
And the whole combination — hardware, software, services — is gearing up in a way that seems to be just waiting for additional products to join them. The iPhone in 2007 was connected to the Mac only through iTunes and a USB cable. Part of what made the iPhone a surprise in 2007 was that Apple clearly was in no position to add a new platform that harmonized seamlessly with Mac OS X. Today, they are.
Last week generated much talk of this being a “New Apple”. Something tangible has changed, but I don’t see it in terms of old/new. As Eddy Cue told Walt Mossberg two weeks ago, there was a transition, not a reset.
There is an Old Apple and a New Apple, but the division between them — the one actual reset — was 1997, with the reunification with NeXT. Old Apple was everything prior. New Apple is everything since.
New Apple didn’t need a reset. New Apple needed to grow up. To stop behaving like an insular underdog on the margins and start acting like the industry leader and cultural force it so clearly has become.
Apple has never been more successful, powerful, or influential than it is today. They’ve thus never been in a better position to succumb to their worst instincts and act imperiously and capriciously.
Instead, they’ve begun to act more magnanimously. They’ve given third-party developers more of what we have been asking for than ever before, including things we never thought they’d do. Panic’s Cabel Sasser tweeted:
My 2¢: for the past few years it’s felt like Apple’s only goal was to put us in our place. Now it feels like they might want to be friends.
It’s downright thrilling that this is coming from Apple in a position of strength, not weakness. I’m impressed not just by what Apple can do, but by what it wants to do. ★