Third Party Performance Impact with Tim Kadlec

Watch the session

Third parties can wreak havoc on performance! In this live audit, we walk through some real-world third-party issues to look out for, how to test their impact, and a few potential solutions as well.

Want more info? Please follow us on Twitter: Tim Kadlec and WebPageTest

Sign up for a FREE WebPageTest account and start profiling



Tim Kadlec
Director of DevEx Engineering



(00:04) All right. Hey everyone. Thanks for jumping on again. Today I wanted to walk through a few third-party things. Now, normally we tend to go a little long. We've had a few of these that ran an hour and a half, and everybody just seems to hang around, which is absolutely awesome. I did want to say, though, that today I'm going to try to keep it concise, within an hour, because of HTTP Archive. If anybody's not familiar, HTTP Archive is an amazing resource that catalogs not just performance but the characteristics of millions of sites, and records those characteristics (01:00) over time. So we can always go back and learn exactly how those sites are built and how they've been performing, plus security and accessibility considerations. It's just an incredible resource.


Anyway, every year they put out an annual report, the Web Almanac, and that involves a ton of people from the community coming together and digging into HTTP Archive information. So what they're going to do is actually stream some of the analysis, which should be kind of interesting, because Rick [01:39 inaudible] is going to jump in, dig into this wealth of data, and start processing it, which I think is pretty cool. They're going to go in about an hour, and then I'll try to make sure that we're set. So if anybody wants to jump off and watch that, they can; it should be interesting. And this is powered by WebPageTest; they're running WebPageTest under the hood to (02:00) capture all this stuff. So it is also relevant and related.


Anyway, today what I wanted to do is focus on some third-party stuff. Third parties come up a lot; certainly in any of the past streams or podcasts that we've done, there's been a fair amount of third-party things and issues pulled into the different sites and pages we've looked at. But today I wanted to zero in on them specifically and show a few things to look out for when it comes to third parties from a performance perspective, as well as ways to validate and test their impact. So, the site that we'll look at primarily, though we might look at another one today, is one I thought would be interesting.


(03:00) So this is what we'll be digging into. Somebody sent this over, and as I've mentioned before, I've always had a little bit of a soft spot for looking into government or community kinds of sites. I always think they are kind of fun. So we'll take a look at this through the lens of WebPageTest. I'm going to drop this in. We'll use our usual midrange Android device emulation here, a Moto G4 running Chrome, on a 4G network. Somewhere in the US is fine, it doesn't really matter; we'll just stick with Virginia for now. Three test runs... nah, we'll make it more interesting: five test runs, why not? And we'll do first view only for now, for the sake of making it go a little faster. Capture video we always want checked, because that way we get all the goodies around the filmstrip and comparison, and the advanced metrics. And then we'll add a label here. We'll say census 4G.
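As an aside, the same test configuration can be kicked off through WebPageTest's REST API instead of the UI. This is a minimal sketch: the `runtest.php` endpoint and parameter names are real, but the location/connectivity string is an assumption you'd want to verify against what your instance's `getLocations.php` actually exposes.

```python
from urllib.parse import urlencode

API_ENDPOINT = ""

def build_test_request(url, api_key, label):
    # Mirrors the UI choices from the session: 5 runs, first view only,
    # video capture on. The "location" string below is an assumption --
    # check getLocations.php for the IDs your instance really uses.
    params = {
        "url": url,
        "k": api_key,
        "location": "Dulles_MotoG4:Moto G4 - Chrome.4G",  # assumed ID
        "runs": 5,
        "fvonly": 1,   # first view only
        "video": 1,    # capture the filmstrip video
        "label": label,
        "f": "json",   # return result URLs as JSON
    }
    return API_ENDPOINT + "?" + urlencode(params)

# Fetching this URL (e.g. with urllib.request.urlopen) would queue the test:
# request_url = build_test_request("", "YOUR_KEY", "census 4G")
```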


I don't think we need to worry about capturing anything else specific here, nothing too fancy. We're just going to run a basic test for now. I'm going to open that off in the background so I can come back to this tab if I need to modify any test preferences. All right, so this should just take a few seconds to go through. Oh, one thing I was going to say while that's running: for manual testing, (05:00) we always love our real devices, and those are tucked away right now in the Dallas location, for Android devices at least. You'll notice there's a G4 and a G6. We did tweet about it this week, and I'm pretty excited about it: we are in the process, there's actually some work going on this morning, of getting some new physical Android devices stood up in a New York location. So we're adding the Moto G7 and Moto E6, which are mid- to low-end Android devices, sort of that next step beyond the G4.


If you're curious, a lot of that comes from Alex Russell's excellent annual research into mobile performance, network connectivity, and devices. He really goes deep on this (06:00) to try and figure out what a good, representative, 75th-percentile sort of experience is, and I highly recommend reading it. But the conclusion he came to in the latest one was that while the G4 is still a really good low-end device, sort of that next step up is the E6, and the G7 is a hair better. So those are the two that we'll be adding, and those should be coming to the New York location very soon. So expect more real devices available on .org, which will be nice.


All right. So here are the results for our test. Full disclaimer: I did validate, at least when this was sent over, that there were third-party problems, so I know there are some we can dig into. I haven't looked in super detail, but there's enough here that looked interesting. (07:00) So, here are our results. We're looking at our median run based on the speed index metric. There are a few things jumping out from the metrics perspective. We have a really late largest contentful paint and a lot of total blocking time, which means that it's either very JavaScript-driven or there's a lot of third-party JavaScript coming into play here, plus some shifting. And this is interesting, this is always interesting to see; I'm going to zoom in so you can see it better.


Start render and first contentful paint: we have a little bit of a disagreement here. First contentful paint is the first piece of content that gets painted out onto the screen. Now, that is a metric provided by the browser itself. The browser is saying, okay, here's our first paint event; we can tell that some piece of content was painted out. But sometimes it can get a little tripped up. It could paint, for example, something that (08:00) has an opacity of zero or whose visibility is hidden. It's technically still a paint event, so it fires, but maybe nothing's actually on the screen from a user perspective.


Now, start render is not provided by the browser. That is a purely synthetic measurement that WebPageTest records. What we do is capture an entire filmstrip of the loading process. We look at the moment the test starts and then at each individual frame, and the first time we see something appear in a frame, that's when we say something started rendering. So we're not looking at any underlying paint events for this metric; this is purely about when something appears on the screen. And so when we see a gap here, that's always a sign that there's something interesting happening. Something is technically being painted, but it's not visible.
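Just to make that frame-comparison idea concrete, here's a toy version of it. The real WebPageTest implementation is fuzzier (it tolerates small pixel differences like anti-aliasing and spinners); this sketch only captures the core idea of scanning frames for the first visible change.

```python
def start_render(frames, blank, threshold=0.0):
    """frames: list of (time_ms, pixels) captured at a fixed interval.
    Returns the timestamp of the first frame that visibly differs from
    the blank starting frame -- a toy model of WebPageTest's filmstrip
    analysis. `pixels` here is just a flat list of values."""
    for t, pixels in frames:
        changed = sum(1 for a, b in zip(pixels, blank) if a != b)
        if changed / len(pixels) > threshold:
            return t  # first visible change: "start render"
    return None  # nothing ever appeared
```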


This is typically due to client-side A/B testing, (09:00) because if you're using A/B testing on the client side, you don't want the page to shift around while you're manipulating the content or adding a banner or whatever you're doing with it. So they have this idea of an anti-flicker snippet: usually it's either built into the script or something you inject on the page directly that says, we're going to make the page opacity zero, for example, until those experiments have a chance to run. That's typically what causes this.
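A typical anti-flicker snippet looks roughly like the sketch below. This is illustrative only, not any particular vendor's exact code; the element id and the `revealPage` hook are made-up names:

```html
<!-- Illustrative anti-flicker pattern (not a specific vendor's snippet) -->
<style id="anti-flicker">body { opacity: 0 !important; }</style>
<script>
  // Reveal the page once the experiments have applied, or after a
  // safety timeout (often a few seconds) so a slow or failed tag
  // can't hide the page forever.
  function reveal() {
    var s = document.getElementById('anti-flicker');
    if (s) s.parentNode.removeChild(s);
  }
  window.revealPage = reveal;  // hypothetical hook the A/B tool would call
  setTimeout(reveal, 4000);    // fallback timeout
</script>
```

Note that the browser still fires paint events while that style is in effect, which is exactly why first contentful paint and start render disagree.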


We can look at run 5 and dig into the filmstrip view to validate some of that. We'll change the thumbnail interval to 0.1 seconds. Okay, so here we can see that filmstrip view (10:00) we were talking about. Sure enough, we're not really getting anything onto the screen until around 5.5 seconds, but according to the metrics our first paint fired way over here at 4.3. So we've definitely got a significant gap there. There's a little red line here off to the left of the filmstrip; depending on how I zoom in you can or can't see it, but right there, that little red line lines up in the filmstrip with exactly what's happening in the waterfall below. So I'm going to line that up to where we get our first actual start render and then come down here.


All right. We also have things on the waterfall. This darker green line that's by the red line, that is our start render metric. This lighter green line is our first contentful paint. So we can see there's a bit of a gap between the two here, (11:00) and right there is our culprit. This is Adobe's tag manager probably causing the problem, because tag managers... there's a great post from Andy Davies. Let's see if we can track it down quick; how's my Google-fu? I think it's this one. I guess he doesn't give the exact one, but it's still a good post. Adobe Launch, thanks, Sean. Is that what we're looking at over there? Okay. Hi Sean, by the way. (12:00) Okay, so my guess is that's probably our culprit. Let's go back to this here, view page source, search for "opacity," and see if there's anything in the actual HTML itself. Sometimes it's a CSS block or something like that. Nope. So we're getting it injected. And to Deshaun's point, if this is A/B testing or personalization, then it's probably the snippet itself injecting something that's going to be turning the page off.


Yeah, here we go. So this is mumbo jumbo code, but right here is this little line saying body hidden style, body opacity zero. (13:00) So this is going to be dropping in; without wading through the code, we know this is here. At some point they're injecting this opacity-equals-zero part into the page, and what that's going to do is hide our page until something turns it off. Usually the script itself either has a timeout associated with it, or whatever personalization or experiments need to run get completed and executed, at which point they'll toggle that opacity back on.


So somewhere in here, probably where that first contentful paint occurs, we are actually getting that first paint event to happen. But the page is hidden because of that opacity zero, so nothing shows up on the screen. This is always an interesting one to me, because depending on how you're measuring things, this is one of those issues you may not realize you have. (14:00) If you're looking at your RUM data, or you're using any sort of tool that doesn't do this visual double-check and just relies on what the browser is telling you, your metrics are going to look fine. I mean, not great, 4.5 seconds isn't great for first contentful paint, but you get what I mean: you're not going to notice. You drop this in, your first contentful paint seems like what it was before, and it doesn't seem to make much of a difference. It's only if you're looking at this other supplementary metric that you're ever going to notice you have a problem to deal with in the first place.
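That double-check is easy to automate if your tooling reports both numbers. A tiny heuristic, sketched here with an arbitrary 500ms threshold of my own choosing (not a WebPageTest rule):

```python
def flag_hidden_paint(first_contentful_paint_ms, start_render_ms, gap_ms=500):
    """Flag the pattern from this audit: a large gap between the
    browser-reported FCP and a visually-measured start render usually
    means content painted while the page was hidden (for example by an
    anti-flicker snippet). The 500ms threshold is illustrative only.
    Returns (is_suspicious, gap)."""
    gap = start_render_ms - first_contentful_paint_ms
    return gap > gap_ms, gap

# The numbers from this session: FCP at 4.3s, start render around 5.5s.
```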


That gets back to the tooling, and being aware of how tools are measuring things. Actually, that's kind of a good point too. Let's run another test on this one. We're not going to change anything other than the way the network throttling is applied. I think it's under Chromium. (15:00) Yeah, okay. So we have this little option here to use dev tools traffic shaping, not recommended, and I will explain why, and you will see why, in a second. Actually, we can backtrack on one thing really quick before we get into throttling.

The giveaway that it was Adobe Launch is more my familiarity with the usual tag management offenders. I've seen it with Google Optimize, Optimizely, the Adobe stuff; there's Visual Web Optimizer. There are a few common culprits. So here's what I was doing to try and figure it out. (16:00) First off, the giveaway that something was amiss was that we have a gap between first contentful paint and start render. That's always a hint something's off, and it's almost always client-side A/B testing.


The next thing was to scan through these requests. The first bunch here are all first-party, and none of them look like obvious offenders. Then I was looking at the third parties: who would be a third-party provider that might be doing this? And as soon as I saw the Adobe DTM domain, I recognized it from prior audits. What you would be looking for here specifically, if you were coming in without that backdrop of knowledge, is something like this, though: a render-blocking JavaScript file. We can see the render-blocking with the orange icon. So, a JavaScript file that stops the display of the page initially and probably has a good chunk of JavaScript execution right after it, which is what is going on here too. This pink bar right after the request is telling (17:00) us that's all JavaScript execution.

So: render-blocking, a big chunk of JavaScript execution, all of that occurring before our first paint. That's probably a pretty good clue that it's contributing in some way in this scenario. The other thing we could have potentially looked for, although I don't really see it... oh, yeah, I do, and it's a tiny little bit. I'm going to have to really zoom in. That was almost a Winnie-the-Pooh, Tigger-y sounding woohoo. I don't know what sound that was. It's really tiny, so it's hard to see in this case, but the other thing is we could look at the start render line, that green line, and look for any JavaScript execution that happens right before start render. Because that little bit of script execution is probably what triggered the ability for the page to be displayed again.

So if I look here, there are these bitty little pink blocks. (18:00) Oh, they're so small; it's tiny execution. But you can see them here and here. These two requests have a little bit of execution right before start render, and if we go back over, these are also served by Adobe. So my guess is one of these is what's toggling the actual visibility, and that would be another clue that it's the Adobe thing causing the problem. So those are all good clues to look for in that kind of debugging situation.

Okay. So, on to the throttling. This is running in the background; apparently we've still got a little bit of a queue. Did I do emulated? Yeah. Okay. Oh no, I did the real device, that's why. Let's do emulated so we're comparing apples to apples, my bad. Our real devices are limited by the number of devices we have available at a location, which (19:00) means that while the accuracy is awesome, because it's a real physical device, the queue can build up a little bit. It's limited by how many devices we can throw at it.

Chrome emulation means we're doing a lot of fancy stuff to emulate the CPU and network and get that really aligned with what a real device is doing. We're doing a lot of work on that side of things to make sure they're very accurate, but those are AWS instances, which means we can scale them up and go through a queue faster. That's why, if you ever run into a backlog on the real devices versus a backlog on the emulated stuff, you'll notice the emulated stuff moves a lot faster.


Ooh, nice eagle eye. We'll come back to that in a second, maybe: jQuery twice, that's probably a third-party thing too, but we'll look. So, throttling. (20:00) WebPageTest throttles the network to simulate, in this case, our 4G connection. We throttle the network at the packet level. We own the system, we own the machine we're running it on, so we can throttle at the packet level; we don't have to worry about that impacting other applications running or anything like that.

Contrast that to what dev tools does. Dev tools provides network throttling: if you open up Chrome dev tools, you're able to throttle to simulate a 3G connection or whatever. That throttling, though, because it can't take over your whole machine and manipulate things at that level, lives between the browser's render process and its network process, which means there are some things that dev tools throttling doesn't actually apply to.


So when I throttle to a 3G connection in dev tools, it's going to do very different things than if I was throttling in WebPageTest, where we're actually throttling (21:00) at the packet level, which is a much more accurate representation. That's not to throw shade; testing with network throttling inside your dev tools is still great, better than not testing at all. But when we're talking about the way tests are run, we have to pay attention to this, because we might come to different performance conclusions depending on how it works.

So for example, what I just did is run the same test: same network throttling, a simulated 4G connection, on an emulated G4, same page. The difference is that this time I chose to use the dev tools throttling capabilities instead of WebPageTest's packet-level throttling. I wanted to show where that fallout happens in a waterfall. So if I click through here, let's pull up the waterfall view for the other test quick so we can look side by side... in fact, let's do a comparison. Let's grab this test result. We're going to pop open our test history. It's just there, nothing sensitive, hopefully. Good.

All right. So this is test history. I'm going to grab this URL, which is our dev tools throttled test. Ignore this one; this one here was a mistake. So I'm going to hide it and pretend that it never happened. It's like a Bob Ross thing: hide your mistakes.


Anyway, we're going to grab the dev tools throttled test and the non-dev-tools throttled test, select them both, and hit compare. What we're looking at here is our filmstrip comparison. We can see that with dev tools throttling applied, (23:00) we get out onto the screen much earlier, even though we haven't changed anything about the page, and the network should really be the same. It's just the approach to throttling that's changed. So if we go down to the waterfall, we can overlap them and see why that is.

So with network, packet-level throttling, look at the connection costs: that's our DNS plus our TCP plus our SSL. For every domain we connect to, we have to go through that process. That's our teal and our orange and our purple.


With packet-level throttling, the throttling impacts every part of that process.

The request itself, the connection costs, all of that gets impacted. With dev tools throttling, because it sits between the render and network processes, it doesn't have the ability to apply that throttling to things that happen entirely in the browser at the network layer. So if I go back over here, (24:00) even though the actual content downloads are getting impacted, look how small the connection costs are. That's because the throttling isn't really getting applied there at all.

So things like these initial connection costs to the Adobe script, for example, seem really tiny, the tiniest little sliver of connection cost, almost nothing. Or here, this redirect is another good example. Here's a redirect where we're seeing 31 milliseconds for all the connection time to make that round trip. It looks small; the redirect doesn't look like it's impacting us very badly. But with more realistic network throttling, we can see that the redirect is actually costing us 700 milliseconds compared to 30.
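The arithmetic behind that difference is worth spelling out: on a throttled link, every round trip pays the full latency, and a cross-origin redirect to a new HTTPS connection needs several of them. This is a back-of-the-envelope sketch under assumed conditions (a roughly 170ms RTT, a TLS 1.2-style two-round-trip handshake), not WebPageTest's exact model:

```python
def redirect_cost_ms(rtt_ms, new_connection=True, tls=True):
    """Rough lower bound on what a redirect costs when each round trip
    pays the full RTT. Assumes a TLS 1.2-style handshake (2 RTTs);
    TLS 1.3 would shave one off. Illustrative arithmetic only."""
    rtts = 1                      # request/response for the redirect itself
    if new_connection:
        rtts += 1                 # DNS lookup
        rtts += 1                 # TCP handshake
        rtts += 2 if tls else 0   # TLS handshake
    return rtts * rtt_ms
```

With an assumed ~170ms RTT, a redirect that opens a new HTTPS connection costs roughly 850ms before any bytes of the real resource arrive, while DevTools-style throttling leaves the handshakes mostly unthrottled, so the same redirect can look like ~30ms.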


To be clear, I'm not throwing shade at network throttling from a dev tools perspective at all; (25:00) it's still an extremely useful tool. But it is worth making sure that when we're using our measurement tools, we understand how those measurements are derived, because they will impact our decisions. We may look at dev tools throttling and think, okay, redirects are not that big of a deal, or it's not that bad that we're connecting to this third-party domain, when in reality it's actually a very expensive thing for us. That was a little bit of a sidebar, but relevant, I think, just so that we know what we're looking at.

All right, let's go back to our original test. Let's look at this one, because somebody brought up the different jQuery versions; while it's timely and we're talking about it, let's figure out what's going on there. That was a keen eye, by the way; we didn't have that waterfall up super long. You were scanning, nicely done. All right, so we're going to click on run five... and we're already on run five, he says, ha ha ha. (26:00) jQuery. Let's see what we got here. Huh, look at that. Okay, so we've got jQuery being loaded here, and we've got jQuery being loaded here. Let's find out quick. What is this? Oh, that's not jQuery. Okay, so this one's not bad. That one looks like it's jQuery, but it actually seems to be just a little map, so that's not bad. This one is jQuery 1.2.

So first off, these two are not a big problem; the first two look like they were the same, loaded in parallel, not an issue. However, while looking at that, it does look like there's a jQuery down here as well: jQuery 1.9.2, a different version altogether. (27:00) And see, there might be even more, because here's another one being pulled in off of the Google APIs as part of another widget. This is version 3.3.1. Let's go down here. If we go down, there's this section where we have all the resources in a sortable table, and there are a few things that might help. When we're talking about third parties in particular, I like to sort by CPU time just to see where we're spending all of it. You can see here that, again, Adobe is leading the way. That little mapper... I think that was the mapper, right? Oh no, this was the actual library itself. So that jQuery library is getting a fair amount of CPU time as well. But here I think we also see a few more of the jQuery things popping out. So again: jQuery, jQuery 3.3.1, jQuery 1.4.2. (28:01) Yeah, there are quite a few different versions of the library all being pulled in, and it looks like some of that's different components.


The Ajax one is being requested from a third party, but as part of a widget. See how the text is blue? That blue text means it doesn't belong to the main frame, so we're looking at an iframe or something like that. So this widget is making a call out to that particular version. Up above are these other jQuery versions: this is clientlibs, census clientlibs, clientlibs foundation, clientlibs. This is our small one, that's our mapper. Sorry, here we go: clientlibs granite. (29:00)

Okay. So we probably have a different situation here, where different components or different pieces of the application are pulling these in. It is interesting. We do have multiple versions of the library, and that's always a fun one to debug. It's one of those things I'll check for pretty often, especially if there are a lot of third parties involved. But identifying it, unless you're looking at the waterfall, can be tricky, because what's ultimately going to happen is that when you check for the presence of jQuery, you'll see whatever version happened to load last. And again, it's kind of like the start render thing: if you're not looking a little deeper, you might not notice that there are five or six different versions of the script coming in all together here. But good eye, keen eye, on those.


(30:00) All right, so let's go back to our third parties, and let's see if we can pinpoint just how much of an impact these things are having. There are a few different tools inside of WebPageTest that we can use to see the performance impact of third parties and sort of validate and compare. And I think there are two primary third parties in particular here that we could test. One is the Adobe stuff, which is actually having this impact on our paint metrics, or I guess our start render, and has quite a bit of JavaScript execution. We know this is likely to have a pretty big impact if we were to get rid of it. The other is actually this jsapi request. It is render-blocking, and it results in this redirect that we know is expensive. And it redirects to, if we click the response tab, loader.js, which is this one here, (31:00) so we are actually paying two connection costs: we connect to the one domain, and then we connect to the gstatic one as well.


So we pay the connection cost twice, and you get that daisy-chain effect: we have to wait for that redirect and then go out and make this other connection. So that looks pretty expensive too, even though there's not much in the way of JavaScript execution there; I'm not seeing any pink here, and it wasn't at the top of the list down below at all. So it seems lightweight from an execution perspective; this one is more about blocking display of the page.

Couple things. First off, any time we have a third-party resource that is also render-blocking, wherever you see that orange icon on a domain that is not our first party, that's a single point of failure risk. What that means (32:00) is: a single point of failure is any time we introduce a third party in a position where, if that third party fails, hangs, or is just really slow to respond, our site's performance pays the cost, because we cannot display the page until that third party completes the whole request process. So if we go back to the homepage, we can demonstrate this by using the SPOF tool. We'll drop in a host, in this case the Adobe one, and we will kick off a test.

This is going to take a while to run, so we'll do other things while it's going. What this is going to do is kick off a test as normal, with everything working just fine, and then kick off another test where it routes any request to this host to a blackhole, a server that just never returns a response. What we're trying to see here is how big the risk is. (33:00) If worst comes to worst and this domain hangs, they're having issues and it doesn't respond, what is the impact on our site's performance? So we'll let that run in the background and see the result in a little bit.
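For reference, the same hang can be reproduced by hand in your own browser: WebPageTest has historically published a blackhole server at `` that accepts connections but never responds, so pointing the third party's hostname at it in your hosts file simulates the outage (verify the address against current WebPageTest documentation before relying on it). The tag-manager hostname below is a placeholder, not the exact domain from this audit:

```
# /etc/hosts  (or C:\Windows\System32\drivers\etc\hosts)
# accepts the TCP connection and then never
# responds, so any request to the mapped host simply hangs.     # placeholder third-party hostname
```

Browsing the site with that entry in place gives you the same "how long does the page stay blank?" experience the SPOF test measures.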


The other thing we have available to us is scripting. Actually, we could do this a couple of ways, but I'm going to use the script way. There's also a block tab somewhere in here; block lets us block a request based either on a URL substring or a full hostname. Either one would work, but there's one other feature I kind of want to show off as part of this, so we'll do the script first. If you go to script, the first thing we can do is run this test with one of those third parties blocked.

So let's say we're going to navigate; it always needs a navigate command. We're going to navigate to our URL, and this will fill in for the URL that's here in the URL box. (34:00) I'll zoom in a little bit so you can see that better. And then we're going to block the domain... he says, he should know. Now I'm talking in the third person. Oh no, this is [34:19 inaudible] scripting. I should know this; I use it every day and I still forget. blockDomains is what it is. Okay, yep. So we can do blockDomains with this domain. We could actually provide a list of all the different third parties if we wanted, but in this case we are going to just block the single one. So here's how we might test a situation where a particular third party is just not included in the page. Now, this isn't the same as the SPOF test; it's not going to hang. This is going to completely exclude any request to that domain. (35:00)
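Put together, the script being typed here looks something like the sketch below. WebPageTest script commands go one per line with the command and its parameter separated by a tab, `%URL%` substitutes the URL from the test form, and the blocking command needs to come before the navigate so it's in effect when the page loads. The hostname is a placeholder for whichever third party you want to drop:

```
blockDomains
navigate	%URL%
```

You can pass `blockDomains` a space-separated list of hostnames to exclude several third parties in one run.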

The other thing we could do is run a test and see: what if we didn't load third parties at all? Which of course is a complete hypothetical, but it's actually useful, because there are times when you're monitoring, or trying to measure the impact of purely your own code changes, and that third-party stuff is noise. It's noise that's important to pay attention to, but sometimes it can also get in the way. Or maybe you are legitimately trying to argue that we should just not use certain third parties.

We can do that by either providing a list of all of these different domains, which we could grab easily enough by looking at the connection view right here, where we can see every domain we connect to; we might want to block all of those, for example. Or we cheat (36:00) and go the easy route and use a different script command. We're going to go back to our script and change this to blockDomainsExcept. We could drop in the URL or the actual hostname that we want to not block, but let's be fancy and use substitution.

What this will do is take the URL that's up here, figure out what the origin is, and then pass that along to blockDomainsExcept. And I think that should work. So this pattern, navigate to the URL with blockDomainsExcept, should block all other domains: no third parties. Kicking off a lot of tests. (37:00) My test history is going to come in handy here. All right, so let's go back. I think the SPOF test should be done; I think I saw that. Here's our SPOF view. Remember, in this case what we said is: let's pretend that Adobe fails, it hangs, it never responds. What's the impact on our site's performance?
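For reference, the first-party-only script being described would look roughly like this. As it turns out at the end of the session, `%HOST%` (the bare hostname of the test URL) is likely the substitution that works here rather than an origin substitution; note that it only preserves that exact host, so first-party subdomains (a static or CDN subdomain, say) would be blocked too unless you list them explicitly:

```
blockDomainsExcept	%HOST%
navigate	%URL%
```

Comparing this run against the baseline isolates how much of the page's load time your own code is responsible for.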

At the top here we have our original, and at the bottom we have the filmstrip of the single-point-of-failure test. You can see that if Adobe just hangs, we don't get anything onto the screen until around 33 seconds. Interestingly enough, remember how I mentioned that you'd set the opacity to zero and often these little snippets have a timeout, like if experiments don't apply within four seconds, then display the page? Well... okay, never mind. (38:00) As I started to talk that through, it doesn't actually make sense here.

We have a render-blocking script, so it is going to block the display of the page regardless of whether there's that timeout in place. So ignore me on that one. This is what you get when you walk through stuff live. But the difference is night and day here. And what we could do to demonstrate this to the business, or to anybody... not everybody cares about the metrics, and that is shocking to say as a perf person, but that's not what appeals to people; that's not what always gets them excited or helps them understand the impact. A much more visual, much more readily approachable way of demonstrating the impact here is this video view. So here we can, for example, generate a video of the two comparisons side by side, and we could present this to anybody (39:00) regardless of their knowledge about performance, and they're immediately going to see that, yeah, this is bad.

Like, we are taking on a very significant risk by putting third parties in our render-blocking path, in that critical path, because if something hangs, we are potentially looking at basically an outage on the site. For the most part, how many people are going to sit around for 33 seconds? So this video, or you could grab a gif of it or whatever, and share it; that's a very impactful way of showing the impact there too. So those SPOF tests are really useful to be able to see, like, what is the potential issue here if something does go down, and how can I show that to somebody else to demonstrate the risk that we're incurring?
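If you want to reproduce that hung-third-party scenario yourself, one documented WebPageTest technique is to reroute the offending host to the blackhole server, which accepts connections but never responds. A sketch, assuming a hypothetical tag manager hostname (parameters tab-separated, per my reading of the scripting docs):

```
// Route the (hypothetical) tag manager host to WebPageTest's blackhole
// server, which accepts the connection and then hangs -- simulating
// the single-point-of-failure scenario described above.
setDnsName	assets.example-tagmanager.com	blackhole.webpagetest.org
navigate	https://www.example.com/
```

The SPOF tab in WebPageTest's advanced settings does effectively the same thing without hand-writing a script.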

Oh, I killed this one. Which script command did I use to kill it? Did I say... is it block domains except? (40:00) Let's find our docs. It was block domains except, or maybe the origin substitution didn't do exactly what I anticipated here. Something broke on this one. Oh, good call. Oh, hey Pat, I didn't know you would be here. Nice to see you, buddy. Maybe it is host, you're right, I think you're right. Let's change it to host and give that a shot.

While that's running, though, we do have our block domains test where we blocked just the Adobe one. So let's look at our test history again. We're going to grab no Adobe, and we're going to grab our original, and compare these two. (41:00) All right. So here's our filmstrip view again, 0.1-second interval. We are getting stuff out on the screen much, much earlier, because we don't have that render-blocking script, we don't have the execution, and we also don't have the opacity change happening. So we're getting stuff out on the screen at 1.9 seconds versus the 5.5 or whatever; that is a sizable impact from just one single script that we're blocking out. And I don't think... oh no. Did we leave dev tools throttling on and compare the wrong one here? My goodness. Did I keep dev tools throttling checked? Amateur. Here we go, let's pull that off. That'll give us more... I was going to say, that is dramatic.

So this is our block domains except, so no third parties. And this one... let's get rid of this... (42:00) was block domains, okay, so that's no Adobe. Now we're going to go back to our test history and clean up some of those icky ones, because that wasn't great. Basically, I kept the dev tools throttling on, so everything looked better, because like we talked about, those connection costs and stuff are just not being paid in those scenarios. So we're going to basically clear out all of those, I think. Yeah, except for the last two; we're going to get rid of those and hide those. I like that; that's new, by the way. The ability to hide things is pretty new, and I'm very, very happy that it exists.

Yeah, I forget too sometimes. I think one of the things that we'll be working on is the ability to (43:00) retroactively add labels to some of these tests, which I know is a work in progress. Because I forget, or I do what I just did: you run the same test with the same label under different conditions accidentally. So that's why the hide is very helpful. But yeah, labels are awesome. Here we go, this is better. Let's compare now. All right, that's better. I mean, it's not better in terms of performance, but it's better in terms of being realistic. So this is our no-third-party test. Let me just make sure I've got that labeled right. Yep. And that worked this time.

Perfect. Yeah, so this is the one. So it was host versus origin. Pat's call was correct. Turns out the person who built WebPageTest knows what he's talking about, surprise, surprise. (44:00) It was dropping in the host instead of the origin. So in this case, if we look at the waterfall, all of those third parties are gone. We look at the connection view, and we're just connecting to that primary domain, that's it. So if we wanted to look at the impact of third parties versus no third parties, we can see that we do shave about 2.5 seconds off of those metrics, off of the start render time; we get things out a lot faster and finish our loading process a couple of seconds earlier. Yeah, the whole experience is much quicker.
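For the record, then, a sketch of the version of the script that worked, per Pat's correction: the substitution macro is the host one, not the origin one. Macro names are as I understand them from the WebPageTest scripting docs, and parameters are tab-separated:

```
// Block every domain except the host of the URL under test.
// %HOST% expands to the hostname of the test URL; %URL% to the URL itself.
blockDomainsExcept	%HOST%
navigate	%URL%
```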

In fact, we can get a nice detailed comparison here. Now, we're starting to get into the danger zone with the number of tabs here, but let's grab the test URLs for these two. I'm going to grab this one, which was our core test. (45:01) We're going to go to... Matt, you're in the stream, I think I saw you. Take a bow, this is built by Matt Hobbs. We will grab that test, and then we will grab the other one, which was the no-third-party test URL.

And what this is going to do is generate some quick links for us. We've looked at the filmstrip comparison; the other one I really like here is the graph page comparison view. This lets us get nice and detailed. So we're going to compare these two tests... we're going to compare against... should I use a nice little label there? Which one was our ID for the initial here? That's this one, right? 62. Yep, okay. Let's just add the labels here so it's clearer. This is (46:00) our base, and this is our no 3rd.

Now let's generate... now let's do the graph comparison and take a look. Here we're going to compare against our base, and we'll start at zero just so all the charts look nice and familiar. And here we go. So what we're looking at here is each individual run of these tests compared against each other. So we can see, for all of the different metrics, very quickly: how does it change by taking away third parties? Is the difference statistically significant, or is it kind of minor? And also, what does it do from a variability perspective?

So for example, First Contentful Paint looks like it's a statistically significant improvement, quite a bit of improvement. And if we look at each individual run, we can see that our base here, (47:00) the red dots, is consistently slower. It looks like the no-third-parties runs might be... they're pretty close in terms of the actual variability here.

This view, by the way, is actually being worked on, so soon it should be a little easier to tell the dots apart there. We can see the impact on Largest Contentful Paint: certainly less variable without the third parties. When we had the third parties there in that base comparison, you can see we had a few spikes, but we're pretty stable once we get rid of all the third parties. That's one of the other things about third parties: it's not just the impact on the metric, but what does it do to the stability of that metric?

Now, interestingly, here's where we're getting hit a little bit. It looks like we are paying a cost on cumulative layout shift. We are getting a higher CLS score without third parties now, (48:00) probably because one of those third parties was the Adobe tag manager, which hides the display of the screen for the first however many seconds. So any shift issues that are underlying in the page, we don't notice, because they're tucked away and hidden from the user. And once we pull that out, once we get things rendering quickly, the shifting becomes more obvious. So we'd have to address that CLS issue here.

Total blocking time: again, statistically significant, although there's still some variability, which is to be expected; a lot of that is first-party JavaScript, jQuery things. But you can see how this starts to give you a better sense of what the actual overall impact is, in terms of variability, across all those different metrics. It's a really powerful comparison view when you're trying to see the impact of things like this, or of experiments. (49:00) We should have our no-Adobe one; let's look at that, Adobe by itself, and make sure that worked.

So this is the one, again, where we navigated and we just blocked the Adobe tag manager. So first off, start render and First Contentful Paint are right in line; that's good, we got rid of that problem there. If we click on our median run and click through, you can see our start render and First Contentful Paint right next to each other. We've got the improvement in total blocking time: it went from 2.2 or whatever it was to 1.7. Here's that CLS spike we were talking about before. But again, as we've seen, if we did our comparison view on these two, no Adobe versus Adobe, we're certainly getting stuff out on the screen much quicker.

(50:00) Let's take a look at what the shifts are, because I'm curious. So I click the highlight layout shifts option here, just to see if we can highlight them in the filmstrip view. That's one way of looking at it. Yeah, so the problem with this is that we don't record the filmstrip at 60 frames per second by default on desktop, because there's a big observer effect when we're recording that fast: capturing at 60 frames per second slows down your test and affects the accuracy of the results. So we tend to space those filmstrip screenshots out a little bit. So sometimes we can see where the shift should have been occurring, but it doesn't necessarily line up perfectly in the thumbnails themselves. We can see the area, and we can probably anticipate where some of that shifting came from, but it doesn't highlight it as cleanly as it would if we were capturing this on a real mobile device versus an emulated one.

(51:00) Let's try the web vitals view, see if there's anything that clues us in over there. No Adobe, cumulative layout shift... here we go. So, I love these little gif, jiff things, these are handy. Oh yeah, look at this. We've got a whole bunch of, like... is recombobulation a word? Because that's what's coming to mind. I don't know if that's a word, but there's some interesting stuff happening here. So this is the after, and this is the previous when I hover over it. It looks like the previous has this banner up at the top; in the after, it gets shuffled under the header and the search. That's doing some weird stuff there, and there's a little shift that's--

Yeah, there's a bunch of interesting fallout here from moving this content around, (52:00) whatever is being passed down here. We're manipulating something on the client side to impact that display, shifting those different pieces of content around, which is causing our problems here. Looks like we might be just grabbing that, extracting it from the DOM, and then shoving it in elsewhere. So that's something that we could probably fix through a combination of... I mean, probably through CSS, maybe serving up slightly different markup at the start, but my guess is this is mostly a styling-related thing. And that is 1.00 of our 1.05. So if we fix this shift, CLS goes down to basically zero. So yes, by pulling out Adobe, we potentially incur this CLS score, at least initially, but it also looks like that's a pretty surmountable, pretty fixable thing.

(53:00) So yeah, just to summarize what we've gone through with third parties: we identified some potential third parties that were causing problems. We can use the SPOF test to see what happens when things go really bad. We can use block domains, and block domains except, to help us pinpoint what's the impact of this one third party, or what's the impact of all of the third parties, that kind of thing. And we also talked a little bit about the tooling side, just to keep in mind how things are measured. That's where that start render value becomes really, really valuable, as a way to fact-check the First Contentful Paint. And also, just understanding from the measurement side of things how the throttling can impact our decisions on third parties too: if we're not aware of how that throttling is being applied to third parties, it's entirely possible that we're going to underestimate the significance of a performance issue. (54:00) So yeah, hopefully those are a few helpful tips, stuff like that.

We're about three minutes away from the HTTP Archive stream, so I don't necessarily want to get into something else, right in the weeds, yet, unless folks kind of want to hang around for a second or two. But anybody got questions, anything that came up? Otherwise, like I said, I think HTTP Archive is starting in just a couple minutes, and it could be fun to... I don't know, there's a thing that Twitch people do, you raid. How do you do this? That would be fun. I would totally raid their stream, which I think is basically, like, hey, we're watching this, this looks like a cool stream, let's jump everybody over automatically, and then y'all can leave and do whatever you want. But I actually don't know how to do that. I'm not hip with it.

(55:00) I also know there's some bad raid stuff too, like that's a whole thing, but we're good people; this is a legitimate one. Oh, here, so maybe we can do this. Should I try it? All right, nobody laugh, I'm going to try to do a thing. It looks like I can do it right within the chat. I think we might have to wait until they're live. I assume you have to. So we have to wait, like, two minutes. So we need two minutes of stalling, as long as Rick is on time, and then we'll try to jump everybody over. So it looks like I just type something into the chat and it sends all of you over. So if it doesn't work, nobody laugh at the fact that I'm chatting random, weird things.

(56:00) Yes, this is what I'm going to try, and we'll see how that goes. What's the worst that can happen? Like, the worst that can happen is nothing, right? And then we just leave, and then we have to manually go over there, and it's okay; everybody lives and survives and everybody's happy. Do we have any HTTP Archive... like, are there any of the contributors or folks here? Okay, that's insulting. I am insulted. Yeah, hi Matt. How do you kick somebody out of the... no, I'm just joking. I would never boot you, buddy. That's harsh, though. (57:00)

I lost where I was going now. Now that I'm in, I just got called out for being an old man. My wife commented, apparently, that there are gray hairs becoming very obvious in my hair, but I disagree. I tell her it's the sunlight and the way it hits them. And that's also why I'm standing further back, just in case. But you're not the first person to [Inaudible 57:27] me in the last day or two on that. I was going to say... oh yeah, that's what I was going to ask: is anybody in this group one of the HTTP Archive contributors? I know sometimes we get folks who are working on HTTP Archive and stuff like that sitting in the chat. I don't know if Sai made it on today or not, but she's actually writing the performance chapter for the Web Almanac as well. So yeah, it's a thing.

HTTP/3. What do you want to talk about? (58:00) It's interesting. Fair. Yeah, no, so HTTP/3... I don't know if anybody's had a chance to look in. There's been a couple of papers that came out recently. They were saying something about the extent to which the initial gains or performance wins, that a lot of people thought were going to be present in the protocol, don't seem to be quite holding up. I haven't had a chance to read through those myself. But I would say, just generally, having gone through the HTTP/2 thing, it's always kind of interesting to dive into that stuff. We tend to draw conclusions, I think, too early on some of them.

You start off with this, like, oh, it's going to be awesome, it's going to be a big game-changer. Then eventually those expectations start to get reeled in, (59:00) and we start to realize it's probably more incremental, and there are probably some pros and cons that people were overlooking until it hits the real world. And then the other thing is stuff like this: Pat's mentioning that the prioritization is still [Inaudible 59:15] in HTTP/3. There's a lot of HTTP/2 prioritization that we don't even have; like, a lot of stuff that serves up HTTP/2 doesn't get prioritization right, or messes up different pieces, and it used to be even worse. So it's dangerous to draw too many conclusions in the early greenfield stage of a new protocol like that; that has been my experience. There are just so many moving pieces that all affect the actual performance, and whether or not it's a big improvement. So drawing conclusions this early always feels kind of risky.

Yeah, the recommendation here from Matt about Robin's post is very good. I'll grab you all the links. (1:00:00) It's a light read... a light 30 minutes, I believe, is the estimated reading time. But it's good, and this is only part one. Robin always goes into great detail; it's very well explained. If you want to learn about this stuff, jumping on there is a great place to start. All right, so let's try this raid thing, see how this works.

All right. All right. All right. Fingers crossed. It feels dangerous. It's doing something. It didn't immediately submit the chat; I see a whirling thing. No, one more try. Yeah, thank you, everybody, for jumping on, I really appreciate it. It's always fun, and it's always fun to have the comments too, and the chat; I think that's one of my favorite parts. Ooh, it seems to be complaining at me. Let's try one more time. Oh, is it counting down to the raid? It's telling me that there's an error. Oh, well. Good, awesome, it says it's raiding. Power user, yeah, I'll take it. Not a clue what I'm doing. It's fine.


