Super Bowl Special: with Tim Kadlec and Henri Helvetica

Watch the session

Looking at ESPN.COM before the Super Bowl Weekend.

Want more info? Please follow us on Twitter: Tim Kadlec, Henri Helvetica, and WebPageTest

Sign up for a FREE WebPageTest account and start profiling

Henri Helvetica
Head of Community
Tim Kadlec
Director of DevEx Engineering



(02:03) Henri: All right,

(02:06) Tim: Mr. Henri. How are you doing, buddy?

(02:07) Henri: I'm very good, Mr. Kadlec.

(02:10) Tim: Thanks, Henri. It's too formal. That was a bad start. We're going too formal right at the get-go.

(02:16) Henri: We should just start over. Yeah, that was very, like, that was very Matrix, Mr. Kadlec.

(02:23) Tim: It was, yeah, that's alright.

(02:26) Henri: How are you doing?

(02:26) Tim: Not too bad, not too bad.

(02:29) Henri: And, it seems like this is something we need to ask all the time. How is the weather in your part of the world?

(02:38) Tim: I mean, right now it's not too shabby. It's actually warm. Well, "warm", it's like 20-something degrees Fahrenheit, which for us is balmy this time of year, so I'll take it.

(02:51) Henri: Yes, 20 Fahrenheit. Okay. I mean, you're a few degrees below freezing, but so am I; the F in Fahrenheit is for freezing here, that's really it. But whatever. Would you believe I've not left the house since Sunday? Not good news, but it's been a little cool.

(03:23) Tim: Nobody likes to go,

(03:25) Henri: remote life,

(03:26) Tim: Cold, that's fair.

(03:30) Henri: Good afternoon, Mr. Calvano. 47 degrees Fahrenheit in New Jersey, a heat wave. Well, I didn't know Florida extended that far north.

(03:44) Tim: Yeah, Jersey's balmy, apparently. Who knew?

(03:45) Henri: Yeah, apparently that's where we need to move to. How are things, though? Just generally.

(03:54) Tim: Yeah, I mean, good. It's been fun. Hopefully everybody by now has seen a few of the things that have been going on with the WebPageTest UI that we talked about last time. We did a lot of little incremental changes and stuff. We've been getting good feedback, and there's been a lot of people pointing out some really helpful little things that we've been adding. It's been progressing pretty nicely; pretty happy with it all.

(04:16) Henri: No, that's great. And for anyone out there listening or watching: when you take WebPageTest for a little test drive, let us know if there's something that was kind of tucked away that you couldn't find anymore, or wasn't too obvious. We've had a few people come back and ask us about certain things that may have moved, but we're definitely there to point that out again. Sometimes it just takes a little spin in that scroll and it was actually right there. Like yesterday, someone had asked about comparative video, and it was actually cool to let him know about some of the features we had that were, again, not super obvious, especially for maybe new users. But that's what we're here for, for sure. Shout outs, I think that's them.

(05:16) Tim: Yeah, they're the folks sharing the comparison videos on Twitter and stuff. Those are always fun to see. Scott and I have talked a lot about making the videos a lot easier to get to, so I think it's fair to say something like that'll be happening.

(05:28) Henri: But it's such a powerful tool, and I think they're very pleased to show the differences that were sometimes not so obvious when you look at results individually.

(05:39) Tim: And I think the killer thing with the video feature for the comparison is that it takes away the requirement for any sort of perf knowledge when you're communicating stuff. This is where people have found it really impactful, and personally the same: if I'm communicating to stakeholders or to potential clients, or just to a broad audience. In the case of Builder.io, they want to get across that the performance is good. So you can go through the whole "here are the metrics" and show it, and that's great for folks who get those metrics and understand what they mean. But having a visual, side-by-side video comparison takes away all need for any sort of knowledge about what's going on under the hood, because everybody can watch the videos and see the impact. So it's a much more impactful way of presenting those improvements or degradations. I remember Etsy had monitors up that showed how Etsy performed around the globe. Those kinds of things are much more impactful when they're in that sort of visual form.

(06:45) Henri: Yep, absolutely. And as you mentioned, you don't have to be extremely well versed and mega knowledgeable to see it; it's the eye test. Or as, what's his name, Terrell Owens would say: if it looks like a duck and it smells like a duck, it probably is a duck. Or quacks like a duck, I don't know.

(07:11) Tim: Quoting a now-retired NFL player is probably a good segue, Henri, given what you told me you wanted me to dig into today.

(07:21) Henri: I mean, yeah, why not? So for anyone who may not live on planet Earth, this weekend is the very last weekend of the NFL season. There'll be no more professional football, well, no more NFL; I don't know what the Euro league does. But, Bengals, great. Yeah, he was, and they kind of did him dirty in the last season; I'll get into that after. But yeah, it'll be the last weekend of the NFL. So it's one of the bigger, well, it's the biggest weekend of the year for them.

Something I think that people don't realize: the Super Bowl is actually pretty international. It's watched across the globe, and people do set the time aside to watch the game. And obviously a lot of them are probably hitting the website for updates and information about the game, leading up to and post-game. So I thought it might be interesting to look at some of the entities out there that are going to be covering the NFL Super Bowl. I thought that would be a fitting sort of end to their season. Not that they've been waiting for this audit, or commissioned it; I doubt it. But who knows, maybe they're one of the people watching right now under a pseudonym.

(08:49) Tim: Right, could be. Yeah. So I think you mentioned a couple sports sites, which we can dig into; we'll just figure out what we want to do there and dive into one. But it's also probably worth noting that, while we're not going to do it today, to your point about the traffic, this is a huge time of year for folks with marketing budgets too. Anybody who's going to be doing any sort of substantial advertising during the Super Bowl, the same kind of general thought process probably applies. You've got to have that site buttoned up and prepared, because if you're spending millions of dollars on Super Bowl advertising, driving traffic to the site, and then the site goes down, it's not going to be pretty.

(09:31) Henri: And that's it. I love these little opportunities, because they've become such a commonplace part of our lives, but people may not realize that there's a lot of heavy lifting that goes on behind the curtains. People need to get ready for these massive weekends of traffic surges and content. Hopefully they've done their due diligence and some good QA; we'll see.

(10:06) Tim: Well, so let's see. You sent me a couple; I think the one thing you said was, what's the number one?

(10:12) Henri: So, I did a little bit of quick research, and apparently, and this is more or less North America-wide, so apologies to those joining us from anywhere outside of North America, ESPN.com is actually the number one sports destination online right now. So I thought it might be interesting to take a look at them. Outside of the obvious one, which I used to poke around sometimes, and the layout was kind of like, this is kind of boring for a company with so much money. Like, come on, gentlemen. So yeah, here we are.

(10:57) Tim: All right. So let's start then, I guess, with the ESPN one. Although I like the irony, by the way, I don't know if irony is the right word, but we're doing this because Super Bowl weekend's coming up, and I'm a big basketball fan, and it's the NBA trade deadline today, which is why everything here is NBA. But same thing. Okay. So I did run a couple tests before, just so that we didn't have to wait for those tests to complete, which we can dive into. Let's see, I'll make my font size much bigger too. I fired up a couple because there were some immediate things that stuck out right away that I thought would be kind of fun. So the first thing I did is use these simple configs that we've got now, which I am kind of crushing on in a hard way.

Just because the easy config before, for anybody who's used it, got you to Virginia on a Moto G4, but that was kind of it. It was a good set of defaults to get you going, but it was one set of defaults. And what I'm liking about this is, I can't speak for everyone, but we hoped that by having a few simple configs, it would drive people to test in different browsers and different locations. And I can say for myself, just the fact that it's staring at me makes me go, oh, okay, yeah, I can run that too and see what happens. So that's what I did. I kicked off the mobile Virginia one and the desktop Virginia one, which tend to be two that I do a lot anyway, but then I was like, oh, let's fire off Firefox too and see what happens. There's a bunch of things that kind of jumped out with those first three tests. I'm going to pull each of those up.
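Kicking off the same page from several browser/location combinations can also be scripted against WebPageTest's `runtest.php` API. A minimal sketch: the endpoint, `k` (API key), `runs`, and `f=json` parameters are from the WebPageTest API docs, but the location identifiers below are illustrative assumptions; check your instance's location list and use your own API key.

```python
# Sketch: building WebPageTest "runtest" API URLs for several locations.
# Location IDs here are placeholders -- query your WPT instance for the
# real list -- and YOUR_API_KEY must be replaced with a real key.
from urllib.parse import urlencode

WPT_HOST = "https://www.webpagetest.org"

def build_test_request(url, location, api_key="YOUR_API_KEY", runs=3):
    """Build the runtest.php URL for one test configuration."""
    params = {
        "url": url,
        "location": location,  # e.g. "ec2-us-east-1:Chrome.Cable" (assumed ID)
        "k": api_key,
        "runs": runs,
        "f": "json",           # ask for a JSON response
    }
    return f"{WPT_HOST}/runtest.php?{urlencode(params)}"

# Fire the same page from several geographies to compare first byte:
for loc in ["ec2-us-east-1:Chrome.Cable",       # Virginia
            "ec2-eu-central-1:Chrome.Cable",    # Frankfurt
            "ec2-ap-northeast-1:Chrome.Cable"]: # Tokyo
    print(build_test_request("https://www.espn.com/", loc))
```

Each printed URL, when fetched with a valid key, queues one test; the JSON response contains the result URL to poll.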

(12:49) Henri: Awesome. And for those watching who may not be very familiar with WebPageTest: we're able to keep a test history. This is something you want to keep in mind; sometimes you're like, oh man, what happened that one time I did that one audit? You're able to go back, dig that up, and resurface some of the information that you thought was lost.

(13:12) Tim: Yeah. So if you're not signed in, you get, I think, 30 days, assuming you don't clear your local storage. But if you have an account, we keep that stuff around for 13 months, so it's a lot nicer.

(13:24) Henri: Awesome.

(13:25) Tim: Yeah. So, okay. First off, we're going to kind of bounce between them, because I think the differences were what jumped out at me right away, and then we can dive into one or two and see what we can do to improve things. But just on those first three tests: so this is your Moto G4. I'm also, by the way, loving that we've got the browser version now; I think that's nice. So this is the Moto G4, Chrome 98, which just came out a couple of days ago, on a 4G network from Virginia. And we can see kind of the highlights here in terms of those Core Web Vitals, as well as some key metrics like Speed Index, first byte time, start render, and things like that.

So, start render and LCP are quite a ways out on this test, which tells me there's probably quite a bit we could do here to improve that and make it happen a little faster. Pretty high CLS and total blocking time, too. I haven't looked yet, but I'm hoping there's a few low-hanging fruits we can pull out.
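The "red lights" being read off the summary are the metrics judged against published thresholds. As a rough sketch of that banding, here are Google's documented Core Web Vitals thresholds for LCP and CLS (the example values passed in are placeholders, not the actual ESPN numbers):

```python
# Sketch: classifying metrics against the published Core Web Vitals
# thresholds (good / needs improvement / poor). Threshold values are
# Google's documented ones for LCP and CLS; inputs are placeholders.
THRESHOLDS = {
    "LCP": (2.5, 4.0),   # seconds: good <= 2.5, poor > 4.0
    "CLS": (0.1, 0.25),  # unitless layout-shift score
}

def classify(metric, value):
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(classify("LCP", 2.1))   # a fast paint lands in "good"
print(classify("CLS", 0.32))  # a jumpy layout lands in "poor"
```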

(14:29) Henri: I mean, we have three red lights there, so there's got to be something that we could do.

(14:34) Tim: So there's that. This is the cable desktop version for Chrome. Now you can see, as you would expect, and this comes up actually fairly often, you're expecting metrics to be different between these because they're different scenarios. So we see a much faster time to first byte, much faster start render and LCP. CLS is lower. Total blocking time is significantly lower; that's one of the ones that's going to be impacted the most by the hardware, so that makes sense. An emulated G4 is going to be a lot worse.

This is where the differences started to jump out at me, because I was like, oh, let's run Firefox. And at first I was like, okay, Firefox's start render is a different time, and Firefox does have a different way of loading the page and stuff, which we can show a little bit in the waterfall. But then, when I was looking at that, I realized there's also this big gap in first byte time. So this is Firefox on a cable network, same network environment, but it's coming from Frankfurt. And the time to first byte is significantly different: we have about 0.53 seconds versus 0.197 on the same network connection.

So at first I was like, well, maybe something's busted; did I do something wrong with the Firefox agent? Which scared me. But then I reran the test from Chrome, on the cable connection, from Frankfurt, so just a different location. And this is the result of that: the time to first byte is right in that same range. So this is, I think, one of the values of running tests from different locations. Now, first off, a caveat: I'm under the assumption that ESPN's market is pretty global, and I would expect that Germany is one of those markets; there's plenty of people who are going to be looking at ESPN from Germany. That's an assumption, though, so maybe not, but I think it's a pretty safe one. And the time to first byte difference between these locations is dramatic over the same conditions. This is our Chrome from before, again, sub-200 milliseconds, compared to that 500 to 600 millisecond range.

That's why we test across locations, because this tells me that whatever they've got set up for that CDN, something in that chain is not super efficient when you start to hit it from here.

(17:00) Henri: It's funny. I think I had some data on, where most of the traffic is coming from. Let me see ESPN,

(17:08) Tim: Look at you doing your homework, man.

(17:10) Henri: Man, come on, it's the investigative work. We have to find out what's going on. But it's mostly America, believe it or not. It looks like the next two countries are Canada and, drum roll please, Japan.

(17:27) Tim: Okay.

(17:28) Henri: And this is from Alexa which, by the way, is going to be no more in May, I believe, which is unfortunate. But that was some interesting data. Yeah, the majority of it is from the US, at 85% apparently, then a few percentage points in Canada, and again, the Japanese percentage points sort of surprised me.

(17:58) Tim: Okay.

(17:59) Henri: But that also means that we can do a test from Japan.

(18:03) Tim: I was going to say, that's the nice thing. And thank you for doing the homework, by the way; you're showing me up pretty well. That's one of the nice things about having that good geographic distribution of locations; it's really, really important. And we do have agents in Japan, so I just fired a few off: one from Tokyo and then one from Toronto.

(18:27) Henri: Yeah. Yeah.

(18:29) Tim: I thought you'd like that. So we can see those running here in the background. Again, those are on that cable connection, Chrome desktop, no mobile emulation for either one, just to keep them consistent in terms of what we're looking at. So our Toronto one is 242; what was the US one? That's reasonable.

(18:56) Henri: I mean, it's not bad. I'm not mad at that.

(18:58) Tim: And then Japan is still running here, which is probably not a great sign; the other one completed a lot faster.

(19:11) Henri: It's certainly not next door, you know? And last I checked, sometimes they're susceptible to sharks biting the fiber lines, so hopefully that's not happening. That's something that Medi shared once. In case he's watching: I read your tweets.

(19:33) Tim: It reminds me, there was a story I heard from some folks at Google, and I can't remember which website or web application they were profiling when they ran into this, but they were working with one of these sites. And this site was seeing this really weird blip where, during a particular time of day, every day, their site just got dramatically slower. So they went through all the usual things: is there anything weird about our system at this time? Maybe the bandwidth or the network constraints change; maybe it's an hour where people are traveling a lot, checking online, or whatever it happens to be. Eventually, and I don't remember how they got to this, what they figured out was that this was a period where people were in their cars, the phone was getting set on the dashboard, and the phone was overheating to the point where it started to slow down the entire device, and this was happening universally for all these users.

So it was actually related to the fact that when the device overheats, everything kind of gets slowed down. It was just one of those funky things. I like those little weird ones.

So, 299. On the surface, none of these look too bad. One of my favorite things when I'm doing this kind of stuff is to look at the full results, just because I like to see how consistent those first bytes are, because it tends to be pretty variable. So this is our... which one is this? Virginia. If I come down, this is plotting the results for all these metrics for each individual run of the test. We default to showing you the median to kind of let you dive in, but these outliers can be really interesting. So we've got one big outlier, 930 milliseconds time to first byte; the other two are very consistent, 197 and 197. And again, this was our Virginia one. So let's look at Canada.

Canada is 207, 242, 415, so it's not dramatically different, right? But other than that one outlier where we hopped into the 900s, it was consistently slower, I'm sorry, coming from Canada. By a good 50 milliseconds, which, at the scale of something like ESPN, 50 milliseconds can really add up.

(22:09) Henri: Yeah. And

(22:10) Tim: Then let's see what Japan was doing

(22:13) Henri: This is super interesting.

(22:22) Tim: Oh, see, that's even more interesting, right? Because now the outlier is the fast one: the 299. On the other two, we're seeing 630 and 758. So I would say, based on these tests right here, if these are geographies where they've got a lot of users, it's probably worth looking into that CDN setup and what they've got in place there, to see how they can better serve some of those other areas, because they're off to a bit of a [Inaudible 22:53] from the get-go with that server response.
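The per-location run numbers read off above can be summarized the same way the "full results" view does. A quick sketch, recomputing median and spread from the three-run samples quoted in the conversation (Virginia 930/197/197, Toronto 207/242/415, Tokyo 299/630/758):

```python
# Sketch: median and spread of time-to-first-byte per location, using
# the run values read off the WebPageTest results in the session.
import statistics

ttfb_ms = {
    "Virginia": [930, 197, 197],
    "Toronto":  [207, 242, 415],
    "Tokyo":    [299, 630, 758],
}

for location, runs in ttfb_ms.items():
    med = statistics.median(runs)
    spread = max(runs) - min(runs)
    print(f"{location}: median {med} ms, spread {spread} ms")
```

Note how the median hides the outlier in both directions: Virginia's median (197 ms) ignores the slow 930 ms run, while Tokyo's median (630 ms) ignores the fast 299 ms run.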

(22:56) Henri: Yeah. Yeah. So

(22:58) Tim: Yeah.

(22:59) Henri: Oh, we have a wonderful guest here. Good afternoon, or good evening, Mr. Irish. It's funny.

(23:12) Tim: Mr. Irish! How you doing, buddy? Yeah, so it is tricky because there are multiple metrics to consider. The median run, if we go back here, let's see if we still actually display it... So the median run is based on the Speed Index metric right now. There's a plan, I think it's on our radar, or roadmap I guess, to provide the ability to change that, because it's already there; it's just not documented. I mean, I think it's still there. Hold on. Yeah. So what I just did is I'm going to drop this into a banner so everybody can see. Maybe... yeah, I am.

If you append that to the end of the URL, you can actually switch the metric that we're choosing the median run based on. So what I just did is I dropped that in, for example, and now it's choosing the median run based on the start render metric. That's obviously not a very intuitive way to do it; that's more of a behind-the-scenes thing right now. But there's been some talk and plans in the past to potentially expose that in a way that would let people change it, so that if you have a particular metric in mind... the Core Web Vitals are actually a really great example: a lot of people say, I'm testing and LCP is a primary target for me. You probably want to be looking at the median run based on LCP.
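The parameter being dropped into the URL here is shown on screen rather than spoken; based on the WebPageTest documentation, it appears to be `medianMetric`. A small sketch of appending it to a result URL (the result ID and the metric names are illustrative assumptions; check the docs for the accepted values):

```python
# Sketch: switching which metric WebPageTest uses to pick the "median
# run" by appending a query parameter to the result URL. The parameter
# name (medianMetric) is from the WebPageTest docs; the result ID and
# metric names below are made up for illustration.
from urllib.parse import urlencode

def with_median_metric(result_url, metric):
    """Append the medianMetric query parameter to a result URL."""
    sep = "&" if "?" in result_url else "?"
    return f"{result_url}{sep}{urlencode({'medianMetric': metric})}"

url = "https://www.webpagetest.org/result/TESTID/"
print(with_median_metric(url, "render"))      # median chosen by start render
print(with_median_metric(url, "SpeedIndex"))  # the current default
```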

(25:01) Henri: Yeah.

(25:03) Tim: But yeah, it defaults to Speed Index today, but it is something you can change.

(25:08) Henri: Awesome. And maybe I'll pull out a link for those who may be asking about Speed Index; I was actually looking at it this morning.

(25:25) Tim: That's all right. So then, first off, I just wanted to call that out because I thought the difference between the locations was kind of interesting, and a good example of why it's important to test from different locations. Particularly locations your audience is already in, although there's always the YouTube effect, which is pretty well documented by now: the YouTube Feather story of improving performance and then suddenly finding out they got reach into an area of the globe where they never had it before, because suddenly YouTube was performant enough that people could use it there.

So just as much as testing locations where you have that large audience, it's probably worth testing other locations where you'd like to reach people but maybe your traffic isn't there yet, just to see if performance is a contributing factor. Because that's an effect that I've heard from many organizations continues to rear its head.

(26:23) Henri: Yeah. And it's funny, that YouTube Feather story is about 10 or 12 years old, but it was a significant moment in development, because there were so many discoveries in that one little moment: what was happening, what users were willing to wait for, just because they got that little trickle of data. And then it just meant that they really had to squeeze some more out of the optimizations they were making at the time.

(26:59) Tim: I always feel like there should almost be an apology from the YouTube team, because you can imagine your productivity levels before you discover YouTube versus after. That feels like it probably would've been an interesting study too: what happens to people's productivity in those areas where suddenly they have access to it. But anyway. Okay, so the first thing we noticed there was the geographic differences; we've definitely got some time to first byte differences worth digging into. I can pick one of them and take a look, diving into some of these metrics we noted were a little more problematic. I'm going to actually jump to the mobile one first for this, here it is, just because I like those stress-test ones. I have said this, I think, every time I do one of these things, but I do, and that hardware is going to stress test things a bit.

And we can see, I guess, we're also pulling CrUX data, so I'm always looking back at that as kind of an alignment, to see: are we looking at real issues here, or are we looking at things that may be a symptom of WebPageTest? Usually the scenario that I'm most worried about is when WebPageTest looks great but the Chrome user experience data does not for that URL. So this is data for that URL on a mobile device. What we can see is that it's looking a little worse here on this test, but the actual site has some issues, though not so much on CLS, which is interesting. That means I might not want to dig into the CLS too much on this test, because their P75 is actually looking really good. It's worth a glance to see why our CLS is higher here, but for largest contentful paint and first contentful paint, in reality we do see some issues there.

(28:55) Henri: Quick question before you go any further: did you want to maybe, in not so many words, explain the CrUX data to people? We see the two arrows.

(29:08) Tim: Yeah, good call. So Chrome collects real user performance data, anonymously, from a subset of Chrome sessions all around the world: people using Chrome on a number of different devices. And then they provide this data, again anonymized; you're not getting any sort of personal information or anything like that. But they provide this data for sites where the traffic is significant enough for it to be in that subset of data. What I like about this is, I think everybody should have RUM monitoring in place, but not everybody does, and that real user data is so important. Because if we're just plowing ahead, testing things in a synthetic tool with no idea of how the site is actually performing in the real world, that's where you get into the situation where I'm sharing a WebPageTest result that looks amazing, but I'm also testing on a Chrome cable connection from a server that sits down the hall. Or my Lighthouse score is 100, but I tested it on my own souped-up MacBook Pro over my high-speed connection.

Those are the things you want to avoid. You want to make sure that you're actually getting results that surface the same issues your actual users see. And that's where I think the CrUX data becomes really handy, and why it's presented here. So what we're doing here is, for each page, in this case it's ESPN.com, we take the actual URL, not the entire site but that specific URL, and we check to see if CrUX has data for that URL, in this case on a mobile device. So we're not looking at desktop numbers or anything like that. We're specifically saying: okay, does CrUX have enough data on that particular page as accessed on mobile devices? And if it does, we're going to present it down here and show you where that 75th percentile metric lives. So you can see what that looks like, or at least what Chrome is reporting it looks like.

Then on top, the black arrow: that's the metrics from WebPageTest for this run. The point of this is to give you some sort of connection to help you assess how realistic the test is. And I'm okay with it if WebPageTest is off here. In this case, for example, the WebPageTest results for this Moto G4 are significantly worse in every metric than the CrUX data. I'm okay with that, especially when the P75 shows me that we've got room for improvement on these metrics. What it means is that if we wanted pinpoint accuracy, we'd go to a little more powerful device, maybe bump the network connection a little. But honestly, the fact that it's showing me these issues and kind of stressing them, that's a good thing.

Where it's more problematic, what I don't like seeing, is when I run a test and this black arrow sits in the green. Let's say this came back looking amazing, and meanwhile the P75 shows that we're in that needs-improvement or poor range. That means I need to rerun the test, because now WebPageTest is going to look awesome, my real user data is not so awesome, and we're probably missing everything.
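The same URL-level field data WebPageTest overlays here can be queried directly. A minimal sketch against the CrUX API: the endpoint and request body shape follow Google's CrUX API reference, but you'd need your own API key, and the p75 extraction assumes the documented response layout.

```python
# Sketch: querying the Chrome UX Report (CrUX) API for URL-level p75
# metrics on mobile. Requires a real API key; the network call is left
# commented out so the builder can be exercised on its own.
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(page_url, form_factor="PHONE"):
    """Body for a URL-level (not origin-level) CrUX query."""
    return {
        "url": page_url,           # the exact URL, as discussed above
        "formFactor": form_factor,
        "metrics": ["largest_contentful_paint",
                    "cumulative_layout_shift"],
    }

def crux_p75(page_url, api_key, form_factor="PHONE"):
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(build_crux_query(page_url, form_factor)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)["record"]
    # Pull just the 75th-percentile value for each requested metric.
    return {name: m["percentiles"]["p75"]
            for name, m in record["metrics"].items()}

# crux_p75("https://www.espn.com/", "YOUR_API_KEY")  # needs a real key
```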

(32:29) Henri: Exactly.

(32:30) Tim: So that's how I like to use that.

(32:33) Henri: Nice, nice, nice. And for those who may not be very familiar, I just wanted to see if we could get a quick explanation out, so they could wrap their heads around it.

(32:44) Tim: I appreciate you keeping me honest on it. I dig it. So I clicked on the median run, and we're going to jump into the waterfall just to see what's going on here a little bit. For anybody who's kind of new to this, the flow that I like to follow is to look specifically at gaps. We already looked at time to first byte a little bit, so the first gap I like to look at is between that initial server response and when we get something on the page. If we see a big gap, that's an indication we probably have a bunch of blocking resources: maybe third-party stuff, maybe a lot of JavaScript-generated things that are slowing down my initial render.

The next gap I'll look for is between when I get something on the screen, so either that start render or first contentful paint, and when I get my largest piece of content on. Because if I've got a big gap there, that tells me something is amiss. Maybe we're loading an image and it's being injected via JavaScript, or it's a background image, and so it gets requested later. We've got a delay there; we should probably tighten that up.

Those are the first couple of big gaps that I look at. So what jumps out here is that the first byte to start render gap is significant. The first contentful paint and LCP, by the way, fire at the same time. That's good, or that seems good; maybe we'll find something in the mechanics here that would change that a little bit. But right now, the fact that they fire at the same time is what you're hoping for. You want your largest contentful paint to fire as close to the first contentful paint as possible. So that's a good sign.
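The two-gap flow just described can be sketched as a crude heuristic. The millisecond inputs below are placeholders standing in for one run's metrics, and the 1000 ms cutoff is an arbitrary illustration, not a documented threshold:

```python
# Sketch: the "look for the gaps" triage flow as code. Flags a large
# TTFB -> start render gap (blocking resources) and a large FCP -> LCP
# gap (late-discovered hero content). All numbers are placeholders.
def gap_report(ttfb, start_render, fcp, lcp, budget_ms=1000):
    findings = []
    if start_render - ttfb > budget_ms:
        findings.append("big TTFB->render gap: look for blocking "
                        "resources (CSS/JS, third parties)")
    if lcp - fcp > budget_ms:
        findings.append("big FCP->LCP gap: late-discovered hero image? "
                        "(JS-injected or CSS background)")
    return findings or ["no large gaps; dig into TTFB itself next"]

# A run where render is far behind first byte, but FCP and LCP coincide
# (the healthy pattern described above):
for line in gap_report(ttfb=600, start_render=4200, fcp=4200, lcp=4200):
    print(line)
```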

(34:16) Henri: I hate to interrupt: Harden to the Sixers. Okay, now let's proceed.

(34:24) Tim: Did it happen?

(34:24) Henri: Just happened,

(34:27) Tim: Breaking NBA trades. Who needs Woj and Shams? You've got Henri.

(34:33) Henri: I wanted to share with someone who would appreciate it.  

(34:36) Tim: No, I appreciate that. I'm going to have to dig in after. All right, so let's go down here a little bit and look at that start render time. That's the green vertical bar we're seeing on the waterfall; I'm going to zoom in so it's a little easier for folks to see. These green vertical bars are where we've got our start render and our first contentful paint firing. So we've got a number of resources coming in before then. Now, the only ones that appear to be render-blocking are requests 2 through 7. In the Chrome trace, they added a render-blocking indicator to the trace event, so we can tell definitively from Chrome that this is a render-blocking resource. And if we see that that's the case, we can surface it here; that's what these little orange icons are.

So let's see, we've got those 7, and a couple things are jumping out. First off, we do have the render-blocking resources; that's always something we'd love to avoid. The other thing is that each of these is coming off a different domain, or subdomain, I guess. We've got a.espncdn.com, we've got secure.espn.com. And with each of those, you've got these teal, orange, and purple segments, indicating the DNS resolution, TCP connection, and SSL negotiation phases for each of these domains, which slows things down. If it's off the same domain, we've already got a connection open; we don't have to go through that whole process. So for each of those, we're getting a bit of a delay, which is not ideal; it's pushing out that start render a little bit.

If I click on it for details, it looks like we're spending about 180 milliseconds on the DNS, 172 on the connection, so we're at a 350, 450, 530, 540, 550 millisecond delay for these requests. Ideally we'd find a way to avoid that. I understand it can sometimes be a little complex, but ideally we find a way to get these to at least resolve to the same origin, so we can pull them off on the same connection. And I think this is going to probably become even more important with what Chrome is considering around network state partitioning. I'm not going to try to make this up from memory.
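
The per-origin setup cost Tim is describing can be sketched with a small script. This is an illustrative sketch, not WebPageTest's actual logic, and the request URLs below are made up to mirror the domains mentioned above:

```javascript
// Given the request URLs from a waterfall, list the distinct cross-origin
// hosts. Each one pays its own DNS + TCP + TLS setup cost before the first
// byte of its first request can arrive.
function extraConnectionHosts(pageOrigin, requestUrls) {
  const hosts = new Set();
  for (const u of requestUrls) {
    const { origin, host } = new URL(u);
    if (origin !== pageOrigin) hosts.add(host);
  }
  return [...hosts];
}

const hosts = extraConnectionHosts('https://www.espn.com', [
  'https://www.espn.com/index.html',
  'https://a.espncdn.com/app.css',      // illustrative URL
  'https://secure.espn.com/analytics.js', // illustrative URL
]);
console.log(hosts); // ['a.espncdn.com', 'secure.espn.com']
```

Each host in that list is a candidate for a `<link rel="preconnect">` hint, or better, consolidation onto the page's own origin, so the DNS, TCP, and TLS phases only happen once.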

(37:22) Henri: I mean, we may have someone we can ask here. Well, wait, there might be a little delay there.

(37:35) Tim: Yeah. So I can drop this link for anybody who wants to read up on it. But basically there are some changes coming in the way that network state partitioning and caching work that, from the sounds of it, are probably going to make the impact of other origins, other domains and other requests, more expensive. Caveat: I haven't played with this at all to actually measure the impact, but one part jumped out at me. In this note, it mentions that in their research, at least when they've had network state partitioning enabled, they've seen about a 0.5% regression in first contentful paint. 0.5%, depending on who you are, may not sound dramatic, but for a change in browser behavior, that's a pretty substantial percentage. But yeah,

(38:36) Henri: Someone thought that partitioning already shipped. We could talk about that after.

(38:45) Tim: Yeah, that's fine. I'm not 100% sure, but it may have. If so, it'd be curious for people who are tracking FCP to see when that shipped and what the impact would've been. Anyway, the main point here is that the cost of those other resources, those other origins, is getting higher. All right, so we've got that. Another thing that jumps out on that start render is there's a critical-mobile.js that's being pulled in.

(39:22) Henri: One meg uncompressed.

(39:25) Tim: Yeah. Hefty little file. Now this one's a little different. The signal that we're getting back from Chrome here is that it's in-body parser blocking, not necessarily strictly render blocking. What this means is that it's being loaded in the body in a way that's going to block the browser from continuing to parse the HTML once it runs into the script. So even though we're not flagging it as an explicit render blocking resource here, this is still going to hold things up the way it's being loaded. And there's a big chunk of execution: 5.4 seconds attributed to it over the course of the page load. But even here, we're seeing what looks like a good two second chunk up front.
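
The distinction Tim draws between render blocking and in-body parser blocking comes down to how the script tag is written. A simplified model of that browser behavior, as a sketch rather than anything ESPN or WebPageTest ships:

```javascript
// Simplified model of how a browser treats a <script> tag based on its
// attributes. A plain script in the <body> (like the critical-mobile file
// discussed here) halts HTML parsing until it downloads and executes.
function scriptLoadBehavior({ async = false, defer = false, module = false } = {}) {
  if (async) return 'async: fetched in parallel, runs as soon as it arrives';
  if (defer || module) return 'deferred: fetched in parallel, runs after parsing';
  return 'parser-blocking: parsing stops until the script downloads and runs';
}

console.log(scriptLoadBehavior({}));               // plain <script src=...>
console.log(scriptLoadBehavior({ defer: true }));  // <script defer src=...>
console.log(scriptLoadBehavior({ async: true }));  // <script async src=...>
```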

(40:14) Henri: So would something like that be a good candidate for deferment.

(40:19) Tim: Depends what they're doing in it. In theory, yeah, you want to push that execution out as far as you can. Though the fact that they've named this critical-mobile is a hint that this might be critical work that has to happen for the mobile layout, in which case deferring is only going to make things worse, because then the JavaScript comes down later, changes everything, things shift around, and it's all pure chaos. So if that's the case, and I'm assuming that's what it is based on what I'm looking at here, then it's less about deferring and more about what they could do to move some of this work out of that client-side JavaScript, so the JavaScript doesn't have to do that much work. That's a lot of work to be doing on the main thread, shifting things around and changing things. If you can take some of that out of the equation, it's going to make a huge impact here too, on that start render stuff.

(41:17) Henri: Also curious how much of that file is actually being parsed to completion, you know what I mean? It is a one meg file uncompressed. So I would imagine that, as you said, it's a hefty JS file. How much of it is actually being consumed as a critical resource? Now they are calling it ESPNcriticalmobile.js, so it's obviously super important.

(41:47) Tim: You mean how much of that is being like actually used versus not?

(41:50) Henri: Yeah. Cause a meg of JavaScript is a lot.

(41:55) Tim: Yeah, it's not little. No, that's a good question: how much of that is actually being used? You could check that. I always think the code coverage stuff in Chrome is pretty good for that. We actually do have the ability to collect that coverage in WebPageTest; we just haven't figured out yet how we want to expose it in the UI. That would be one interesting thing to see, because you're right, that's a lot of JavaScript. How much of that actually needs to be there? It's worth noting, though, when you get into JavaScript execution: this is the thing with bundle size. The industry focuses a lot on bundle size for JavaScript, for good reason. You never want to send extra or unnecessary stuff down.
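
The code-coverage check Tim mentions can be approximated from DevTools-style coverage data. The entry shape here ({ text, ranges }) follows what Puppeteer's page.coverage.stopJSCoverage() returns; treating the ranges as non-overlapping is an assumption, and this is a sketch rather than anything WebPageTest exposes today:

```javascript
// Estimate what fraction of a script's bytes never executed, from a
// coverage entry: { text: fullSource, ranges: [{ start, end }, ...] }.
// Assumes ranges are non-overlapping byte offsets into text.
function unusedByteRatio(entry) {
  const total = entry.text.length;
  const used = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
  return total === 0 ? 0 : (total - used) / total;
}

// Hypothetical 1000-byte script where only the first 250 bytes ran.
const entry = { text: 'x'.repeat(1000), ranges: [{ start: 0, end: 250 }] };
console.log(unusedByteRatio(entry)); // 0.75, i.e. three quarters never executed
```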

It's worth noting that execution does not necessarily correlate with the actual size of the bundle. It's pretty common for folks to see they have big, massive chunks of execution on the main thread, throw a ton of work into stripping down the size of the bundle, and then discover that the execution actually didn't change much at all. Because the amount of work is still there; they just got rid of the unused code, and the unused code wasn't doing anything anyway. I'm just putting that out there because I see that a lot, where a lot of orgs go down that path right away. And it's a good thing to do, but with any performance improvement, you always have to keep in mind: what are the expected outcomes I think I'm going to see, so that I can measure the impact? If you go into bundle size optimization hoping to make a massive reduction in JavaScript execution, that might not be the right way to do it. But if you're looking at how much you're actually sending down over the network, then yeah, bundle size is going to be a great thing for you to focus on.

(43:44) Henri: Good to know.

(43:44) Tim: And Paul did point out that it looks like the network state partitioning is still an origin trial. So yeah.

(43:53) Henri: Thank you, Paul Lee.

(43:58) Tim: I want to look right at the LCP. So I'm going to click on that. That takes us to the vitals page so we can see a little better exactly what's happening here. Oh, that's fun. Apparently LCP is firing on a mammoth. What is this, like a--

(44:15) Henri: Oh my God. Is that what I think it is?

(44:18) Tim: Inline. Is this just an overlay? Did you have an overlay in that film strip?

(44:27) Henri: I do not remember.

(44:35) Tim: We got a big old ad coming in?

(44:37) Henri: Yeah.

(44:37) Tim: That's where our shift is by the way, Look at the--

(44:39) Henri: Yeah, that was amazing.

(44:40) Tim: Moving along right here, everything drops down right there. And then coming back over, it pops in, and then it appears to disappear again. That's fun. I don't see an overlay or anything, but that sure looks like what we're dealing with here. Right: we've got a position-absolute inline image, top zero, left zero, with 99 viewport width units and viewport height units. Yeah, that's kind of funky.

(45:18) Henri: Basics for wowsers

(45:21) Tim: And because it's an inline image, we can't actually highlight it in the waterfall. This waterfall would normally highlight that request if there were a request, but in this case it's an inline image, so there is no request to highlight. But the waterfall still truncates when LCP fires, so we can still see that before LCP, these are the only things we need to worry about. The stuff after that matters, but not if we're looking at LCP specifically. This zoomed-in view also helps a little bit with the start render in this case.

(45:49) Henri: Yes, Which I love.

(45:52) Tim: Yeah. I didn't want to say this out loud, but that does look like that might be the situation: maybe trying to game LCP a little bit. Unfortunately we've seen that, where folks will drop in an inline image and do some weird stuff with it to try and make it trigger LCP a little faster. Totally possible here.

(46:14) Henri: I remember there were a few blog posts being shared with possibly some transparent imaging to game the LCP. I'd have to go back and double check; not that I want to share fake news, folks. But I remember seeing some hocus pocus kind of Core Web Vitals gaming taking place. Yeah, opacity, something like that. I don't know, something around transparency that people were just kind of like, oh.

(46:45) Tim: It makes me have all the more respect for the folks that are working on those metrics at the browser level. Because trying to pick metrics that are going to be guidelines for the entire web is not for the faint of heart. You've got to pick metrics that are pretty generally applicable, generally relevant. That's hard. But then there's keeping tabs on all the weird things that people will do to try and game those. Because given a metric with a goal attached to it, that's going to happen: people will try to game it. So dealing with those, and the odd edge cases and stuff, that's hard work.

I think it's probably worth pointing out, because they are constantly trying to address things like that. If you're like a lot of folks nowadays, paying close attention to that Core Web Vitals stuff as your primary kind of objective: this is, I guess, a really long URL, but they do have the changelog for changes that are directly related to these metrics. Most of these are around metric bug fixes and definition improvements. And occasionally they'll call out changes to the browser itself that may impact these metrics as well. In this case, this is not a change to the metric, but it affects the metric. I think I saw a Twitter thread about this recently too. I'd like to see a few more of those pop up; the network state partitioning could be one they could put by first contentful paint, for example. But this is well worth your time to keep up on if you're ever looking at your data and wondering why, all of a sudden, your CrUX numbers are looking different.

(48:35) Henri: I had a dramatic jump. Yeah, I'll have to go back and see if I can find it, and I'll share it. Well, not that I want to share craziness, but people have certainly been trying to game LCP, amongst, I'm sure, others. But that might be one of the ones with the easiest access to manipulate. Anyhow.

(49:01) Tim: Fun stuff, man. I always like those kinds of edge cases. Anyway, I think it's interesting; there's a lot to learn from that kind of stuff. Not that you should do it, but a lot to learn in terms of how the metric works and how things are measured, which I think is always pretty valuable.

(49:18) Henri: Yeah. Yeah. Well that, was to be expected.

(49:24) Tim: That ad we saw popping in is the biggest shift window by far. We see a few other small shifts that kick in, and I think that's mostly the same thing. Now, one thing worth noting is that on these hovers, like this one, we're highlighting where the shift occurred, but if you look at the hover it can look a little odd, like something else is shifting. Whenever you see that, it comes down to frames per second: we're recording at 10 frames per second, because anything higher, we've seen overhead, and it starts to impact the performance results. But if you're debugging CLS, it's always worth dropping the frames-per-second parameter into the WebPageTest URL and collecting at 60 frames per second, just acknowledging that it's going to slow most of your metrics down. At least it'll give you the granularity for when you're trying to debug layout shifts, which is--

(50:22) Henri: Nice.

(50:23) Tim: Necessary.
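
The frames-per-second tweak Tim describes amounts to adding one parameter to the test URL. The endpoint and parameter names below are a sketch from memory, so double-check them against the WebPageTest docs:

```javascript
// Build a WebPageTest run URL that captures video at 60fps instead of the
// default 10fps, for finer-grained layout-shift debugging. The 'fps'
// parameter is the one mentioned in the discussion; the endpoint and the
// 'url' parameter are assumptions about WebPageTest's public interface.
const testUrl = new URL('https://www.webpagetest.org/runtest.php');
testUrl.searchParams.set('url', 'https://www.espn.com/');
testUrl.searchParams.set('fps', '60'); // expect slower metrics, better CLS detail
console.log(testUrl.toString());
```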

(50:27) Henri: Sorry, I'm just reading some of the comments, making sure. Cool. At one point I was looking at the waterfall and I just saw a bunch of domains pumping out some images and--

(50:48) Tim: Oh, up at the top.

(50:51) Henri: Yeah Somewhere. I thought I'd seen that.  

(50:55) Tim: All these things.

(50:56) Henri: Yeah, exactly. From, I don't know, what was that, 16 through... or not even 16.

(51:01) Tim: Yeah, there's quite a few. Looks like 10 through 39 all look like they might be roughly the same, with maybe a couple exceptions.

(51:11) Henri: Yeah. Bizarre, you know?  

(51:12) Tim: Yeah. So they've got a combiner. Let's see what happens if I click on one of these. What does it look like? A football, a basketball. Okay, so these are the little icons, I guess, that you're probably seeing off to the sides here. It looks like there's some sort of service that they're using to generate those or pull them out. I'm not entirely sure, obviously, what combiner is doing, but they're pulling all of these from that thing.

(51:48) Henri: But we're looking at like downloaded bytes one and a half KB.  

(51:52) Tim: They're all tiny.  

(51:53) Henri: Yeah, all tiny, but it's almost like a zero sum. I know Paul always likes to point these out, Mr. Calvano, by the way, because there are a few Pauls in the audience. It's like 2 kilobytes down for one and a half up. And I was also wondering why they wouldn't just combine that somehow, because that's a lot of requests for a lot of little teeny weeny icons.

(52:21) Tim: For what it's worth, H2, HTTP/2, should make the cost of a bunch of little requests a lot cheaper than it used to be. But there's definitely a point where that starts to not be the case. There's the, I would say, famous or infamous, I guess, Khan Academy thing right after HTTP/2, where they split their bundles from like 30 bundles to 400-and-something of them and saw a huge degradation in performance. And that's not because H2 didn't do its job; it's just that if you've got 400-and-something different requests, that's probably not great. So it'd be fun to do some experimenting there, to see whether, if you approached this a little differently instead of having a bunch of tiny requests, there's some performance impact to be had.

And I imagine that these are assets that are not going to change often. That's another part of the assumption: if they change, they all change. I mean, I imagine that Facebook logo is going to be pretty static, right? If these were all individual files and any individual file could change, and you only want to lose that one from cache, you can make that argument. But these feel like they're going to be pretty static things that are not likely to change often. So there's probably some experiments that could happen there.

(53:39) Henri: Very fair. Very fair.

(53:43) Tim: Yeah. And then the other thing that jumped out was that total blocking time, which is in the visual here; these are just the requests that have JavaScript execution associated with them. And again, what jumps out there is a couple of ESPN files, the JavaScript coming from inside the building, kind of thing. You've got your critical-mobile. Oops, that's right, if I click it, I go through to the timeline; we'll go back here though. Critical-mobile has quite a chunk of execution there. The defer-mobile, which is coming, thankfully, after start render, has a huge chunk of execution associated with it. And if we look at where we're blocking time based on origin, we can see that the ESPN CDN is contributing the vast majority of it. The third-party stuff is almost minuscule in comparison in this case.

(54:43) Henri: Right, right, right. And, oh, so we have the URLs on the left, as usual in any waterfall. Did you want to distinguish what's in black and what's in blue?

(55:01) Tim: Yeah. So black is a request belonging to the main frame; blue is stuff that's triggered from, it could be an iframe or something like that, but doesn't belong to that main frame. It's being triggered somewhere else. So, I mean, guaranteed they probably have some iframing kind of stuff going on, and that's where you're seeing the blue. It's not the main document. And if I click on those, by the way, you do get the actual underlying Chrome profile, if you're familiar with Chrome DevTools, where you can look in. That's where you start to see, like, here's that ESPN defer-mobile script, which is 5.6 seconds of execution just for that initial execution. So that would be worth digging into too: what's going on there, and can you strip some of that behavior out of there as well?

(55:57) Henri: Nice.

(56:03) Tim: Yep. So anyway, I guess we kind of just stuck with ESPN today instead of jumping on the Super Bowl. But the things that we'd want to look into would be the time to first byte around the world: what are my key locations, and how is that actually performing? It looks like we have some delivery issues there that we should be dealing with. We saw there's a big delay in that initial start render, which is a combination of things. It looks like there's some massive JavaScript execution occurring from this critical-mobile, which isn't ideal, and a fair amount of blocking initial requests are part of it. But my guess is that it's more this critical-mobile execution, and maybe some of the fallout that's getting queued up after that execution, because these things should be pretty non-blocking at this point, and it looks like they are.

(57:15) Henri: Yeah. I mean, it sounds like, clearly, they don't get off to a good start, which then creates this sort of cascade of everything arriving late.

(57:32) Tim: Sure, yeah, which is pretty fair. I think that's the nice thing about focusing in on some of those initial metrics: if you can improve your start render time, there's a trickle-down effect, right? It improves largest contentful paint and all that stuff. Same with time to first byte: improve it and everything else benefits. So it's fixable, for sure.

(57:53) Henri: Yes. But essentially, as Serge was just getting at, the idea is that nothing really got off to a good start. So we ended up having these numbers that were in the teens in certain cases. The speed index, like, wow, 15 seconds.

(58:16) Tim: Yeah. Well, your speed index in particular is going to be a little messed up, because speed index is trying to look at the percentage of the screen as it gets painted out. The faster you get the majority of the screen painted, the better, but what it does is compare that to the final state. And so where speed index can get messed up sometimes is if there's movement: carousels, or in this case, the ad coming in. So first off, we don't really have a gradual progression anyway; it seems like we jump from zero to a big chunk right away, once we get to that 8.5 seconds. But it's probably also exaggerated a little bit.

It's not going to be a great score anyway, but it's probably exaggerated a little bit, because I'm guessing our speed index is going to be comparing to this final state, and things shift quite a bit in between. So speed index is going to get pushed out quite a bit. But even without that, the fact that we go from nothing there to 81% there only after about 8.5 seconds is not going to be awesome. So here's the chart that shows the visual progress, which is what we'll be looking at for speed index kind of stuff. And you can see we're going from nothing, nothing, nothing to, whoop, we got a bunch. Then we drop back down, because now a bunch of that stuff has been shifted down, and then we sort of gradually make our way back up.
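
Speed index, as Tim describes it, is effectively the area above the visual-progress curve: time spent with the page visually incomplete relative to its final state. A minimal sketch, assuming progress is sampled as [time in ms, completeness 0..1] pairs and held constant between samples (real implementations work from video frames):

```javascript
// Speed index sketch: sum (1 - visualCompleteness) over each interval.
// A shift that drops completeness (like the ad popping in here) adds area
// and pushes the score out, even though content arrived earlier.
function speedIndex(samples) {
  let si = 0;
  for (let i = 0; i < samples.length - 1; i++) {
    const [t, progress] = samples[i];
    const [tNext] = samples[i + 1];
    si += (1 - progress) * (tNext - t);
  }
  return si;
}

// Illustrative numbers: nothing until 8500ms, a big jump to 81%, a shift
// backwards to 60%, then visually complete at 12000ms.
const si = speedIndex([[0, 0], [8500, 0.81], [9500, 0.6], [12000, 1]]);
console.log(Math.round(si)); // 9690
```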

(59:40) Henri: I remember early on, I just never understood how the speed index could suddenly go backwards. And then I realized, with the shifts and the ads and whatnot, I'm like, oh, okay. So ultimately, I used to go around saying the speed index could be indicative of several problems, not just one. And I think that's one of them right there. It's a little bit of CLS, it's a little bit of bad start render, it's a little bit of, like you said, jumping from 0 to 80 over two frames.

(1:00:19) Tim: It's certainly one of those metrics that almost got a little lost in the CWV shuffle, but I actually still really like it. The fact that movement can trip up the speed index metric was something that had been brought up in the past by people who maybe didn't like monitoring speed index, because it could be influenced by that, or could vary from one test to another based on that. I almost feel like the fact that we can now measure things like CLS, which we didn't really have a measurement for shifts with at all before, actually gives more credence, more weight, to the speed index thing. Because yeah, speed index is getting pushed out because of what amounts to shifting, and then the fact that it's pushed out is a really, really valid thing.

Now there are differences: a carousel that properly moves isn't technically a disruptive shift, at least from that perspective. So there are still things where it could get a little tripped up, but things like this? This is highlighting a very real user problem here.

(1:01:24) Henri: Absolutely cool. Yeah. ESPN Disney it was, a pleasure dancing.

(1:01:35) Tim: Awesome. Yeah, it's fun. I like doing this stuff.

(1:01:41) Henri: I mean, there's always stuff happening, and again, it's about questioning some decisions, and hopefully testing and seeing if that was the best decision possible. Who knows? So that's what you're here for; I'm here to comment away as well. And there'll be plenty more to check out. Who do you have your money on this weekend?

(1:02:09) Tim: I don't know. I'm going to be honest with you, man: since having kids, I haven't watched a lot of football. I watched a little bit this year, but I haven't stayed up on it too much. If I had to bet, I'd say the Rams, but.

(1:02:30) Henri: Yeah. I think it's going to be an interesting game. I think the Rams are playing at home, so that's going to be pretty wild, and they're in a big fancy stadium, so that should make for, hopefully, good television. That said, it is the last weekend of the year. So again, that's kind of why we wanted to take a look at this: these are big media events, and they're pushing everyone to their sites left, right, and center through advertising. And we didn't even get into some of the sponsors who are going to get their ducks in line for the weekend, and who are probably working overtime right now to make sure everything's launched on time. Yeah. I mean, any closing words?

(1:03:21) Tim: No, not too much, I think. Well, we've got two more, so not next week, but the week after, we'll be back.

(1:03:28) Henri: Definitely. I will say, look for some more content just dropping in randomly. I mean, we definitely enjoy doing these twice-a-month, live, more detailed audits, but I think we'll also be splashing in some content left and right, kind of unscheduled. So go check us out on YouTube, obviously, which is, I guess, a repository of all our audits and other things that'll be taking place. We'll tweet it out. And we'll have this recording cut and edited probably in the next couple of days, if not by tomorrow, something like that.

Folks, check us out every two weeks. Follow us on Twitter @realwebpagetest for more information and more details. And we should have some more content, not lives, but more content, dropping in the next, I'd say, 48 hours, something like that. So give us a follow and we'll keep sharing.

(1:04:39) Tim: Looking forward to it, man. All right. Thanks, everyone, for tuning in.

(1:04:42) Henri: Cheers folks, take care.
