It may be more acceptable to look at averages in web analytics when you’re judging site and page performance from a technical perspective, where industry-set standards and best practices can guide your benchmarking for, say, page load times (though even here, server and user locations will factor in). But it’s a real crime to benchmark based on averages you see across your entire site (like average time spent, average number of sessions, or average bounce rate). Google Analytics loves to serve these up in every view and in nifty dashboards – but let’s be wary.
Just because your banana smoothie tastes mostly like banana doesn’t mean banana is the only ingredient that makes for a tasty beverage. Or something like that.
But I digress.
Make Quantitative Analysis More Qualitative
When looking at editorial content in particular, ensure that outliers in your data set (whatever you determine those to be) don’t skew your understanding of typical performance.
Likely, your pages have different conversion points and roles within the user journey. Some examples:
- Pages with longer-form content that ask visitors to engage vs. pages with strong CTAs that direct visitors elsewhere (on- or off-site).
- Content that’s highly technical vs. more easily digestible content formats such as infographics or listicles.
The KPIs attached to each will differ greatly based on:
- What you want users to do on that page or set of pages
- How well you’re achieving that aim.
I suggest taking these steps when trying to determine your benchmark, or “typical performance”:
Choose Your Data Sets Carefully
Note differences in how various areas or pages of your site are used within the full site ecosystem, then home in on them separately to ensure you’re not comparing apples to oranges (or vanilla to bananas). You can even go a step further and tag pages according to their “content grouping” in GA (this will be case by case) if the audiences for, or functions of, your pages vary greatly across your site. For instance:
- Articles vs. Download pages
- Evergreen blog posts vs. News articles
- Women’s Health vs. Old Age
Get Rid of Outliers for the Chosen Data Set with Filters
Make sure that all the data points in the set you’re looking at are representative of the typical page you’re trying to evaluate. This means getting rid of outliers.
Of course, by filtering (step 1), you’ve already removed general outliers based on broad assumptions about user behavior and what your site offers. Now it’s time to think more deeply about how users vary, too. View your data by landing page, or filter the subsets by source so that traffic type is more similar. Be wary of the following, which may be red flags for what I’ll call “hidden outliers”:
- Paid campaigns may bring in more new users who don’t know your brand (and who may not yet know how to navigate your site).
- Certain pages that are buried deep inside your site may require more pages per session to reach.
- Sites with many landing pages with query strings (URLs containing “?”) will skew performance, since visits to these pages don’t result from typical activity or navigation within your site. These pages typically see fewer sessions yet carry outsized weight in the calculation of averages – unless they indicate campaign referrals (another thing to consider filtering for) or error messages (something to definitely note for site optimization).
(Now, we may be onto something…)
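As a rough sketch of that last point, here’s how you might set aside query-string landing pages in an exported report before benchmarking. The data and field names below are hypothetical, standing in for whatever your GA export actually contains:

```python
# A minimal sketch: separate query-string landing pages from "clean" ones
# in an exported report before computing benchmarks. Rows are hypothetical.
rows = [
    {"landing_page": "/blog/evergreen-post", "sessions": 1200},
    {"landing_page": "/news/article?utm_source=newsletter", "sessions": 40},
    {"landing_page": "/downloads/guide", "sessions": 800},
    {"landing_page": "/search?q=bananas", "sessions": 15},
]

# Keep clean URLs for the benchmark; flag query-string pages for a
# separate look (campaign referrals or error states).
clean = [r for r in rows if "?" not in r["landing_page"]]
flagged = [r for r in rows if "?" in r["landing_page"]]

print(f"{len(clean)} clean pages, {len(flagged)} flagged for review")
```

From there you’d decide per page whether a flagged URL belongs in the benchmark, in a campaign-specific view, or on your list of site fixes.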
Use Median Rather than Mean to Identify More Accurate Benchmarks
Rather than looking at the averages for your filtered view (as you’ve seen, hidden outliers can skew that number), it may be easiest to download the report and calculate the median instead. The median is far less affected by outliers than the mean.
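To make the difference concrete, here’s a quick sketch using Python’s standard `statistics` module on some made-up time-on-page numbers, where a couple of abandoned-tab sessions drag the mean way up:

```python
from statistics import mean, median

# Hypothetical time-on-page values (seconds) for one content group.
# The last two sessions are outliers (e.g. a tab left open).
time_on_page = [45, 52, 38, 61, 49, 55, 47, 900, 1200]

print(f"mean:   {mean(time_on_page):.0f}s")   # inflated by the outliers
print(f"median: {median(time_on_page):.0f}s") # closer to typical behavior
```

The mean lands in the hundreds of seconds while the median stays under a minute – the median is the better picture of the typical visit here.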
You can also do this manually, which takes more work – that is, removing even more outliers by filtering further within your data set until you feel you can rely on the average. But I’ll let you choose the superior approach.