Normalized Scoring

December 16, 2017 • 4 min read

One issue that I see with advanced analytics in basketball is that they aren’t great for cross-era comparisons. For example, points per possession and offensive efficiency (points per 100 possessions) both adjust for pace of play by recognizing that more possessions naturally lead to more points. However, a possession-based statistic doesn’t account for the fact that the expected value of a possession changes across eras. In the 2002-2003 regular season, the Dallas Mavericks were the highest-rated team in offensive efficiency with 96.4 points per 100 possessions. In the 2016-2017 regular season, the Philadelphia 76ers were the worst-rated team in offensive efficiency with 100.7 points per 100 possessions. The inflation of the expected value of a possession is likely due to changes in NBA rules that favor offense, as well as the increased use of the 3-point shot.

Traditional statistics (e.g. points per game) and advanced analytics aren’t a good starting point for cross-era comparisons. But here’s an idea for a simple statistic for individual scoring that does hold up over time. You can think of it as an alternative to points per game (PPG) or offensive efficiency for an individual player. I call it normalized scoring. Normalized scoring describes what percentage of the team’s points a particular player scored. If you add up the normalized scoring for all of the players on one team, you get 100%.

For example, in the 2017 NBA finals Kevin Durant averaged 35.2 PPG. The entire Warriors team averaged 121.6 PPG. Thus, Durant’s normalized scoring for the 2017 NBA finals is 28.9%, i.e. he scored 28.9% of the team’s points.
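In code, the statistic is just a ratio of points. Here’s a minimal sketch (the function name is mine):

```python
def normalized_scoring(player_points, team_points):
    """Share of the team's points scored by one player, as a percentage."""
    return 100 * player_points / team_points

# Kevin Durant, 2017 NBA Finals: 35.2 PPG on a Warriors team averaging 121.6 PPG
print(f"{normalized_scoring(35.2, 121.6):.1f}%")  # -> 28.9%
```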

You can start to see how this is useful for cross-era comparisons: it doesn’t matter what rules change or which styles of basketball become more or less popular; if someone in 1995 has a higher normalized scoring than someone in 2017, that’s a good indicator that they were the “better scorer”.

I know that this statistic doesn’t capture a lot of the nuance of basketball, and it probably is not particularly useful for scouting and player evaluation, which are typical uses for advanced analytics. But I see normalized scoring as a replacement for cross-era comparisons of PPG. For example, if you want to compare the scoring ability of Michael Jordan in his NBA Finals against Kevin Durant, then normalized scoring is a really simple but powerful number that guards against the modern biases I described earlier. In fact, I came up with this statistic while doing the most common of comparisons: thinking about who is the NBA GOAT. How did Michael Jordan’s NBA Finals performances compare to LeBron James’s or Kevin Durant’s? In 2017, Durant averaged 35.2 PPG; in 1998, Jordan averaged 33.5 PPG. By raw PPG, Durant edges Jordan, but by normalized scoring (see the table below) Jordan scored 38.1% of the Bulls’ points in 1998 to Durant’s 28.9% of the Warriors’ in 2017.

The benefit of normalized scoring lies in its simplicity. All the number does is acknowledge that “points” in and of themselves don’t matter. If Adam Silver doubles the value of every shot/basket in basketball tomorrow, that doesn’t mean that every player is now twice as good at scoring. You don’t win basketball with absolute numbers; you win with relative numbers. So, by making a relative PPG metric, you do a much better job of isolating how much true “scoring” any player did. And because it’s simple, it doesn’t bring a lot of the baggage that other advanced analytics bring (WAR, PER). Normalized scoring simply makes explicit what all fans calculate implicitly: averaging 10 points in pickup games to 21 is obviously different from averaging 10 PPG in an NBA game. Normalized scoring is just the formalization of that simple intuition.

Below are normalized scoring and PPG for every NBA Finals MVP from 1990 to 2017. I compiled this list to show how normalized scoring is more informative than PPG.

Year Player Normalized Scoring Points Per Game
1990 Isiah Thomas (Detroit Pistons) 25.8% 27.6
1991 Michael Jordan (Chicago Bulls) 30.8% 31.2
1992 Michael Jordan (Chicago Bulls) 34.5% 35.8
1993 Michael Jordan (Chicago Bulls) 38.4% 41.0
1994 Hakeem Olajuwon (Houston Rockets) 31.2% 26.9
1995 Hakeem Olajuwon (Houston Rockets) 28.7% 32.8
1996 Michael Jordan (Chicago Bulls) 29.4% 27.3
1997 Michael Jordan (Chicago Bulls) 36.8% 32.3
1998 Michael Jordan (Chicago Bulls) 38.1% 33.5
1999 Tim Duncan (San Antonio Spurs) 32.3% 27.4
2000 Shaquille O'Neal (Los Angeles Lakers) 36.2% 38.0
2001 Shaquille O'Neal (Los Angeles Lakers) 32.8% 33.0
2002 Shaquille O'Neal (Los Angeles Lakers) 34.2% 36.3
2003 Tim Duncan (San Antonio Spurs) 27.5% 24.2
2004 Chauncey Billups (Detroit Pistons) 23.6% 21.0
2005 Tim Duncan (San Antonio Spurs) 24.2% 20.6
2006 Dwyane Wade (Miami Heat) 37.3% 34.7
2007 Tony Parker (San Antonio Spurs) 28.3% 24.5
2008 Paul Pierce (Boston Celtics) 21.4% 21.8
2009 Kobe Bryant (Los Angeles Lakers) 32.2% 32.4
2010 Kobe Bryant (Los Angeles Lakers) 31.5% 28.6
2011 Dirk Nowitzki (Dallas Mavericks) 27.5% 26.0
2012 LeBron James (Miami Heat) 28.0% 28.6
2013 LeBron James (Miami Heat) 26.1% 25.3
2014 Kawhi Leonard (San Antonio Spurs) 16.9% 17.8
2015 Andre Iguodala (Golden State Warriors) 16.2% 16.3
2016 LeBron James (Cleveland Cavaliers) 29.6% 29.7
2017 Kevin Durant (Golden State Warriors) 28.9% 35.2

Link to download dataset

Making Measurements

November 1, 2017 • 8 min read

As I write this, the iPhone X is about to launch, and I’m surprised by how many phone reviewers (I’ve found 14) seem to conflate screen size with the diagonal length of the display.1

  • Jim Dalrymple of The Loop:

    However, when you turn it on, the iPhone X is all screen. It doesn’t have the big top and bottom of the iPhone Plus models—it’s just screen. It’s beautiful.

    The iPhone 8 has a 4.7-inch screen; the iPhone 8 Plus a 5.5-inch screen and the iPhone X a 5.8-inch screen.

  • Kif Leswing of Business Insider in an article titled “The iPhone X is smaller than the iPhone 8 Plus, but it has the largest iPhone screen Apple has ever made”:

    The iPhone X actually features Apple’s biggest phone display, measuring 5.8 inches on the diagonal, which is even bigger than the display on Apple’s largest iPhone, the iPhone 8 Plus, which measures 5.5 inches.

  • Jefferson Graham of USA Today in a Q&A article:

    [Q:] Gary Moskowitz: “What’s the size of the iPhone X?” [A:] That would be 5.8 inches, the largest screen size ever for an iPhone. The iPhone 8 Plus is 5.5 inches.

  • Tony Merevick of Thrillist in an article titled “The iPhone X Packs a Bigger Screen than the iPhone 8 Plus on a Smaller Body”:

    Out of all the big new features Apple packed into its game-changing new iPhone X, the phone’s stunning edge-to-edge OLED display is easily the biggest. In fact, it’s the biggest screen the company has ever packed into an iPhone in its 10-year history. And yet, the iPhone X itself is actually smaller than the iPhone 8 Plus and iPhone 7 Plus in terms of height and width.

  • Scott Stein of CNET:

    The 5.8-inch screen is the biggest on an iPhone to date…

  • David Gewirtz of ZDNet:

    She [my wife] found the physical size of the Plus phone to be too large. That’s one of the more interesting elements of the iPhone X – it has more screen, in a smaller package.

    It [iPhone X] is getting you more screen size in less space than the 8 Plus.

  • Brian X. Chen of The New York Times:

    First, the basics: The iPhone X has a 5.8-inch screen that is larger than the 5.5-inch display on the iPhone 8 Plus and the 4.7-inch screen on the iPhone 8.

  • Steve Kovach of Business Insider:

    The best part is the screen. At 5.8 inches, it’s slightly larger than the iPhone 8 Plus screen, but the iPhone X’s body is only a little larger than the iPhone 8.

  • Todd Haselton of CNBC:

    It’s [The iPhone X is] also easier to hold than the larger iPhone 8 Plus but offers a larger screen (5.8 inches versus 5.5 inches) since the display runs from edge to edge and top to bottom.

  • Julian Chokkattu of Digital Trends:

    What we like most about the iPhone X is its size. It feels compact — it’s slightly larger than the 4.7-inch iPhone 8, but it has a bigger screen than the 5.5-inch iPhone 8 Plus. The X is comfortable in the hand, and it feels remarkable to have so much more screen real estate than a cumbersome “plus-sized” phone.

  • Stuart Miles of Pocket-lint:

    Sporting a 5.8-inch screen, the display real-estate is bigger than the iPhone 8 Plus, but the chassis is considerably smaller, thanks to the shift in the display aspect.

  • Ed Baig of USA Today:

    The display, the largest screen ever on an iPhone despite the fact that the overall size of the X is only marginally taller than the iPhone 8, is beautiful.

  • Gareth Beavis of TechRadar:

    If you’ve used any of the iPhone Plus range, you’ll get on instantly with this handset. It’s [iPhone X has] got a bigger screen than any other iPhone, and yet it’s smaller than the iPhone 7 Plus.

  • Alex Cranz of Gizmodo:

    In fact, its [iPhone X’s] display is actually larger than the 5.5-inch screen on the physically larger iPhone 8 Plus.


Here’s the crux of the issue. The iPhone X has a 19.5:9 aspect ratio. This means the screen is approximately twice as tall as it is wide. Previous iPhones have had 16:9 (iPhone 5 and later) or 3:2 aspect ratios (iPhone 4s and earlier).

Of course, no one is actually interested in the diagonal of a screen (it’s the area of the screen that matters), but if two screens have the same aspect ratio, then screen diagonal is a fine proxy.

However, if two screens have different aspect ratios, then the diagonal length of the display is misleading. Here’s an example to illustrate that, where I’ve drawn two shapes. On the left is a square with a diagonal of 1 unit. On the right is a rectangle with a diagonal of 1.5 units. The rectangle has a 50% longer diagonal. But by screen area, it’s 23% smaller than the square.

dimensions.png: Screen size != screen area.

I can continue elongating the diagonal while maintaining a constant screen area by making the rectangle skinnier. There is no limit to how much longer I can make the diagonal, while keeping screen area constant.
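Here’s a quick sketch of that claim. It holds the area fixed at 12.9 square inches (an arbitrary, roughly phone-sized value I picked purely for illustration) while the rectangle gets skinnier:

```python
import math

AREA = 12.9  # square inches, held constant (arbitrary, roughly phone-sized)

# As the rectangle gets skinnier, the diagonal grows without bound.
for width in [3.0, 2.0, 1.0, 0.5, 0.1]:
    height = AREA / width
    diagonal = math.hypot(width, height)
    print(f'{width:4.1f}" x {height:6.1f}"  ->  diagonal {diagonal:6.1f}"')
```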

Not all tech reviewers have overlooked this fact. Phone Arena did the math, and the screen area of the 5.8-inch iPhone X is 2.6% smaller than that of the 5.5-inch iPhone 8 Plus. Vlad Savov of The Verge has also been on top of this.
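For a rough sense of the geometry, here is a sketch that treats each display as a perfect rectangle and computes its area from the quoted diagonal and aspect ratio. Because it ignores the iPhone X’s rounded corners and notch, this simple model shows a smaller gap (about 1%) than Phone Arena’s figure.

```python
import math

def screen_area(diagonal, aspect_w, aspect_h):
    """Area of an idealized rectangular display, from its diagonal and aspect ratio."""
    scale = diagonal / math.hypot(aspect_w, aspect_h)
    return (aspect_w * scale) * (aspect_h * scale)

iphone_x = screen_area(5.8, 19.5, 9)       # ~12.8 sq in
iphone_8_plus = screen_area(5.5, 16, 9)    # ~12.9 sq in

print(f"iPhone X:      {iphone_x:.1f} sq in")
print(f"iPhone 8 Plus: {iphone_8_plus:.1f} sq in")
print(f"The 'bigger' X is {100 * (1 - iphone_x / iphone_8_plus):.1f}% smaller by area")
```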

Ideally, as phones move to different shapes (with cutouts, notches, and curved edges), we can talk about screen area instead of diagonal length.2 Even screen area isn’t a perfect proxy for screen size, though, because at the end of the day a bigger screen is only useful if it lets you do or see more. For some use cases, like watching a movie, this might mean that a 19:9 screen with less area than a 3:2 screen is actually better. Or consider laptops, where vertical space is often at a premium, so a 3:2 ratio is preferred to 16:9, all else being equal.

This is all to say that screen size cannot be summed up by one metric (as is often the case, things are more grey than they are black and white). But if I had to pick one metric, I certainly wouldn’t choose screen diagonal.3 Wouldn’t it be simpler if we just quoted screen area?

  1. Some reviewers, though, have gotten it right.

    Mark Spoonauer of Tom’s Guide:

    It’s worth noting that this 5.8-inch screen gives you less viewing area than the 5.5-inch iPhone 8 Plus, because the iPhone X’s screen has a narrower aspect ratio.

    Neil Cybart of Above Avalon:

    iPhone X has a little bit less screen real estate (in terms of area) than iPhone Plus. The 5.8-inch screen has a more vertical element than its iPhone Plus sibling.

  2. In fact, the “diagonals” reported for phones like the iPhone X and Galaxy Note 8 are not even their actual diagonals. Even though these screens have rounded corners, the companies measure the diagonal as if the screen were a true rectangle.

    From Apple’s iPhone X tech specs:

    The iPhone X display has rounded corners that follow a beautiful curved design, and these corners are within a standard rectangle. When measured as a standard rectangular shape, the screen is 5.85 inches diagonally (actual viewable area is less).

    What, then, are we actually measuring? How much can a corner be rounded off before we stop using its “rectangular” diagonal? Why not quote the rectangular diagonal of a circular display, since we can think of a circle as inscribed in a square, so that it too has (to quote Apple) “corners within a standard rectangle”? This practice seems disingenuous, and it’s another reason not to measure screens by their diagonal length.

  3. I can’t vouch for the validity of this fact, but I recall reading that television manufacturers were big proponents of diagonal measurements, especially as TVs moved from a 4:3 aspect ratio to 16:9. A flat-screen LCD at 16:9 with the same screen area as a CRT display at 4:3 could suddenly be marketed as significantly “larger.”

Sugarcoating

June 12, 2017 • 2 min read

Our understanding of the world is often not veridical. How the world is framed often becomes reality—which is why I’ve been thinking about two words recently.

The first is “feature phone,” which I put in quotes because feature phones are phones without features. They don’t do what an iPhone does. If the term was coined by “feature phone” manufacturers trying to make their phones seem better than they actually are, I don’t think they were successful. People use phones too much. I doubt anybody has walked into a store looking for an iPhone and come out with a dumb phone that was billed as a “feature phone.”

The other term that I’ve been thinking about is “defined contribution.” Like “feature phone,” it sugarcoats the part that matters most: for feature phones, the functionality of a smartphone; for defined contribution, the money you receive in retirement. In a defined contribution plan, like a 401(k), you put away money to buy stocks, bonds, and other investments, and you gain access to them in retirement, but you can contribute as little as you want. Your contribution isn’t strictly defined, and, more importantly, your benefits are not defined. In a defined benefit plan (i.e. a pension), your benefits are set by factors like length of employment and ending salary. In a defined contribution plan, your benefits are variable: depending on when and how much you contribute, and on your asset allocation, you might have a large or small nest egg at retirement. This is not to say that a defined contribution plan is inherently better or worse, but the name hides its defining feature: it’s not a pension plan, and its value is variable.

So let’s call things what they are. A “feature phone” is a dumbphone. A “defined contribution” plan is a variable benefit plan. 👌

Thoughts about Bike-sharing

May 23, 2017 • 7 min read

As I was coming out of an exam a couple of days ago, I saw something I’ve seen a couple times before.

bike_truck.jpg: The truck that redistributes bike-sharing bicycles.

I’ve seen this bicycle redistribution truck around campus a few times since Princeton started its partnership in 2016 with Zagster to launch bike-sharing on campus. The program has 10 or so bike docking locations on campus that support a fleet of 50 bikes.

It’s just a little bit odd and funny that managing the bike-sharing program (a program that is meant to promote less driving around campus and more biking) requires loading bicycles into the back of a truck and redistributing the bikes to different stations. And according to the minutes from the March 27, 2016, meeting of Princeton’s student government, this truck goes around campus twice (twice!) a week to redistribute bikes.

Then I got interested in doing a little more research into the effectiveness of the bike-sharing program. While I’m not a user of the service, if I were, I would have one requirement: can I find a bike every time I need one? That is, can I be confident that when I go to a particular station to get a bike, there will be one there?

For different users of the service, I think the minimum success rate is quite different. For a tourist visiting Princeton’s campus, if you want to rent a bike, but there are no available bikes to rent, then no big deal. You can walk around campus instead.

But if you’re a student who relies on the service to get to lectures and classes, then an empty station is a huge issue. Presumably, the whole point of using the bike-sharing service is that you can budget less travel time. So if you try to rent a bike 5 minutes before lecture and don’t find one, you will be late. If this happens a couple of times, you’ll probably just lose trust in the service and get yourself a personal bike.

And that’s my qualm about the service: if one of its goals is to reduce students’ need for a personal bike on campus, I’m not sure it can do that job. Similar to ride-hailing services, a bike-sharing service needs a large amount of inventory and liquidity to be considered a viable option. The more I think about bike-sharing, the more convinced I am that it cannot replace a personal bicycle for a daily rider.

All of that being said, here’s an interesting exercise: what success rate is required for someone to depend on a bike-sharing service instead of a personal bike? I think the answer is almost identical to how one might answer the same question about using Uber versus a personal vehicle. I wouldn’t be surprised if a 99% success rate is the threshold to clear. That would mean that only 1 out of every 100 times you went to pick up a bike would you fail to get one. At that level of reliability, you begin to approach the reliability of a personal bicycle (I reckon 99% is a fair estimate for a personal bike as well).

But to get to 99% reliability, I wouldn’t be surprised if the Princeton bike-sharing program needed two or three times as many bikes as it has today. Unlike Uber, which can use surge pricing as an on-demand adjustment to bring more drivers onto its network during busy times, bicycles cannot be “dynamically” added to the network as demand changes. It’s not uncommon for the docking locations I see around campus to have no bikes at particularly busy parts of the day. The sketch below gives a rough sense of why the bike count has to outpace average demand by so much.
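Here’s a back-of-the-envelope sketch of that intuition. It’s a toy model, not Zagster data: assume the number of riders who want a bike from a given station between rebalancing visits follows a Poisson distribution, with a mean demand of 4 (my assumption, purely for illustration).

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson random variable with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def p_runs_dry(stock, mean_demand):
    """Chance that more riders show up than there are bikes (toy model)."""
    return 1 - poisson_cdf(stock, mean_demand)

MEAN_DEMAND = 4  # assumed average riders per station between rebalancing visits

for stock in range(4, 13):
    pct = 100 * p_runs_dry(stock, MEAN_DEMAND)
    print(f"{stock:2d} bikes -> station runs dry {pct:5.1f}% of the time")
```

Under this admittedly crude model, a station that sees 4 riders on average needs about 10 bikes to stay stocked 99% of the time, which lands in the same two-to-three-times ballpark.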

Another factor likely preventing a large influx of additional bikes: the cost. According to an exploratory report on bike-sharing by the Los Angeles County Metro, the per-bike installation cost for bike-sharing is $3,000 to $5,000! According to the press release, Princeton has 50 bikes deployed, which implies an estimated cost of $150,000 to $250,000.

I guess everything above brings me to the original point of this blog post: this morning (5/22/17), after eating breakfast, I set out to visit 9 different stations and document how many bikes were at each location. Zagster clearly has inventory and usage information that is orders of magnitude better than this informal survey. And perhaps the Zagster app also shows how many bikes each station has (though a cursory look at the app’s landing page suggests it does not tell you which stations have available bikes). But I still wanted to get out and see the stations with my own eyes.

Going into this, I had a suspicion that a few of the nine stations would have zero bikes. I admit my bias is toward being skeptical of the service’s effectiveness. But to cut the suspense: of the stations I visited, only one (Firestone Library) had no bikes. To be fair, it was drizzling slightly this morning, so I suspect usage was lower than normal. There are a couple more stations farther from the main campus that I didn’t visit, but I was able to see basically all of the stations located on Princeton’s main campus.

Location Number of Bikes
Lakeside Apartments 9
Lawrence Apartments 9
Computer Science Building 4
Carl Icahn Laboratory 4
Princeton Station 3
Richardson Auditorium 2
Forbes College 2
Frist Campus Center 1
Firestone Library 0

Here are pictures of all of them (in the order I visited them) and some commentary.

Richardson Auditorium: 2 bikes (10:30 AM)

richardson_auditorium.jpg

I see this station quite frequently on my way to the dining hall, and it’s been empty many times before. But today, there are 2 bikes here.

Princeton Station: 3 bikes (10:45 AM)

princeton_station.jpg

This is something I see fairly frequently: a personal (non-bike-share) bike locked to the bike-share docks. I see why this happens: some places around campus don’t have convenient locking posts, and even at the locations that do, the bike-share posts are often of higher quality.

Forbes College: 2 bikes (10:50 AM)

forbes_college.jpg

Lawrence Apartments: 9 bikes (11:00 AM)

lawrence_apartments.jpg

This was surprising. Lawrence Apartments houses graduate students and is slightly off the main campus, but it has quite a few bikes. I’m surprised that at 11:00 in the morning there were still this many bikes at the apartments. I would have guessed people would ride them into central campus in the morning.

Lakeside Apartments: 9 bikes (11:15 AM)

lakeside_apartments.jpg

Also graduate student housing. Also has a lot of bikes.

Carl Icahn Laboratory: 4 bikes (11:15 AM)

icahn_laboratory.jpg

This is where the picture of the truck redistributing bikes is from.

Frist Campus Center: 1 bike (11:45 AM)

frist_campus_center.jpg

This station is frequently empty.

Computer Science Building: 4 bikes (11:55 AM)

cs_building.jpg

Firestone Library: 0 bikes (12:00 PM)

firestone_library.jpg

Of course, it was the last station of the day that had zero bikes.

Two Safari Quibbles

February 3, 2017 • 3 min read

As a student, I find the flexibility of a PC indispensable. And judging by the technology used by my peers, this is true of nearly every student. Still, the majority of my computing (maybe ~60%) happens in the web browser.

On macOS, Safari and Google Chrome are the two powerhouse web browsers. Both support modern web standards, and both are very extensible. But Safari has two clear advantages: two-finger scrolling responsiveness and power efficiency. The best comparison I can make between two-finger scrolling in the two browsers is scrolling in Android versus iOS. Chrome feels like scrolling in Android: not bad, but not good. Safari feels like scrolling in iOS: fantastic. Once you tinker around in iOS, you realize how janky Android scrolling is (I use a Moto X Android phone). And there might not be a more important feature than power efficiency: because of how heavily I use the web browser, it is often the largest consumer of battery life.

Having listed Safari’s advantages over Chrome, I still have two quibbles with Safari that keep me using Chrome. Both have to do with the way tabs are displayed in Safari. Note that I have Increase Contrast selected in the Accessibility settings.

safari-tabs.png: Safari tabs.

chrome-tabs.png: Chrome tabs.

1. Lower Contrast Text

First, the text contrast of website titles in Safari’s tabs is lower than in Chrome’s, which makes them harder to read. This quibble might be a function of using a non-Retina MacBook Air, as I could see a sharper, more color-accurate screen alleviating the issue. Even though Safari has lower contrast than Chrome, its text contrast is still above the 7:1 contrast ratio recommended by Apple.

Web Browser Contrast Ratio Text Color Background Color
Chrome (foreground tab) 19.1:1 rgb(0, 0, 0) rgb(243, 243, 243)
Safari (foreground tab) 14.4:1 rgb(0, 0, 0) rgb(214, 214, 214)
Chrome (background tab) 14.3:1 rgb(0, 0, 0) rgb(213, 213, 213)
Safari (background tab) 10.7:1 rgb(0, 0, 0) rgb(185, 185, 185)
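These ratios follow from the standard WCAG formula (compute each color’s relative luminance, then take a ratio). Here’s a short sketch that reproduces the Safari background-tab number:

```python
def relative_luminance(r, g, b):
    """WCAG relative luminance of an sRGB color with channels in 0-255."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two (r, g, b) colors."""
    lighter, darker = sorted((relative_luminance(*fg), relative_luminance(*bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black tab text on Safari's background-tab gray
print(f"{contrast_ratio((0, 0, 0), (185, 185, 185)):.1f}:1")  # -> 10.7:1
```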

2. No Favicons

The decreased legibility of Safari’s tab labels wouldn’t be such a large issue if it weren’t exacerbated by my second quibble: no favicons next to website titles.

Here’s my guess as to why Safari does not show favicons: Safari tabs are implemented using the native macOS tabs found in TextEdit, Finder, etc., and in every other application, those tabs are labeled only with text.

Even so, for me, favicons are the single most important identifier for different tabs. With favicons, I can glance at an icon instead of reading text to figure out which tab is which. Even better, as you navigate to different pages of a website, often the title will change, but the favicon does not. So the favicon offers a certain degree of reliability that text labels do not.

To my point, where appropriate, Apple features icons on many other labels around macOS.

mac-icons.png: Icons used in System Preferences, Finder, and the “Command-Tab” Application Switcher.

Even Safari uses the Touch Bar to display favicons, not text labels.

macbookpro-touch-bar-safari-favorites.png: Image from Apple.

Fingers crossed—🤞🤞—here’s to favicons and increased text contrast in Safari tabs.