A 3-step framework for measuring attribution from digital and traditional campaigns
As more and more startups find themselves squeezed by Facebook’s increasing CPMs and unpredictable performance, they’re looking for new channels that can drive their growth. Digital opportunities outside of Facebook exist, but have their own issues, so many marketers are starting to transition spend back to traditional channels like television, radio, and out of home. (Here’s Bloomberg talking about the huge boom in startups advertising via subway ads, for example.)
However, many of today’s startup marketers grew up on digital and don’t actually feel comfortable expanding into these channels, no matter how effective they might be. They’re ‘traditional’ for marketing as a profession, but not for the average marketer running a small team at a DTC startup. In a lot of the conversations I’ve had with fellow marketers about this topic, one of the reasons for the discomfort is that they worry about attribution — specifically, how to know whether your offline channels are performing without the beautiful dashboards and highly deterministic (if sometimes inaccurate) attribution provided by FB and Google. The impact on their business’s bottom line can be significant, as they continue to invest in channels that are easier to track even as performance on those channels steadily degrades.
How can startup marketers feel more comfortable investing in non-digital channels? I’ve found that there are some straightforward factors to consider if you want to build a system that presents a single source of truth and lets you scale your business through both digital and traditional channels.
To help marketers working on this problem, I’ve outlined my experience marrying online and offline channels into one attribution system. This guide describes three core parts of the process, and breaks down the technical bits in a way that I hope is easy to understand for the technologically-curious marketer.
How to capture digital attribution data in your own database rather than relying on Google Analytics or the internal attribution tools in Facebook and Google AdWords.
How to capture offline attribution through user surveys, coupon conversions, and custom landing page URLs.
How to marry all of the data in your own internal data warehouse in a way that makes it possible to combine the two data sets into a single view, using what I believe to be a set of reasonable assumptions that should be a good fit for most startup marketing teams.
The end result is the ability to look at all of your purchases and see which combination of channels generated your sales this month, as well as the ability to look at a specific purchase and understand which channel deserves the most credit for it.
Step 1: Collecting traffic data
This section lays out the steps you can take to collect web traffic data in a way that can be queried for attribution purposes.
This is the data about your web visitors that will help you better understand how they came to you (a sketch of one way to store these fields follows the list):
Timestamps - when did they visit you? (Your engineering peers may be used to capturing activity in UTC, the time zone that many servers use by default, rather than in your local time zone. Make sure to confirm which time zone your data is in.)
The page visited - what page did they visit?
URL parameters - did they click on a link with campaign information?
Mostly the UTM parameters you use to tag your links with campaign and channel information
You’ll need to parse these out at some point, but it’s better to just store the whole visited URL for now and figure out how to parse it later, so that you can keep that logic somewhere it can be more easily accessed and changed by marketing.
External HTTP referrer - what page did they come from?
The HTTP referrer records the previous page somebody was on before they clicked on your link. (Funnily enough, it’s actually more correct to call it the HTTP referer, because of a misspelling when the header was originally introduced to the HTTP specification.)
You can see it yourself! Open an incognito window, then search for your website on Google. Click on a Google result, and then open the Chrome console. Type document.referrer (yes, with two R’s; unlike the HTTP header, the DOM property uses the correct spelling) in the console and hit enter. You should see google.com as the referrer.
Once you imagine somebody traversing your website, you’ll realize that it’s not very useful to know that the referer to e.g. /pricing/ was your homepage. Therefore, you should capture the most recent external referer, or the fact that there wasn’t one (which counts as direct traffic).
A User ID that represents either an anonymous visitor, or (if possible) an identified user of your product - do you know who they are already?
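If it helps to make that concrete, here is a minimal sketch of what a single stored visit might look like. The field names are illustrative, not a standard, and your warehouse schema will look different:

```python
# A minimal sketch of one stored visit; field names are illustrative, not a standard.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SiteVisit:
    visited_at: datetime              # store in UTC, convert for reporting
    page_path: str                    # the page visited, e.g. "/pricing/"
    full_url: str                     # keep the raw URL; parse UTMs later
    external_referrer: Optional[str]  # most recent external referrer; None = direct
    user_id: str                      # anonymous ID, swapped for a real ID once known

example = SiteVisit(
    visited_at=datetime.now(timezone.utc),
    page_path="/pricing/",
    full_url="https://example.com/pricing/?utm_source=facebook&utm_campaign=spring",
    external_referrer="https://www.google.com/",
    user_id="anon-1234",
)
```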
The above steps help you understand where your visitors are coming from, and how to store that information — but what about collecting it?
How to gather site visit data
The best way I’ve seen so far is to buy a tool. There are dozens of tools that are commonly used for this; Segment is a popular example, and it comes with some other handy features as well. The benefit of buying a tool for gathering site visit data is that you don’t have to figure out any of the nuances of deciding how to store this data, or how to reconcile data that looks confusing. (For example, many tools offer the ability to collapse down traffic that comes from a single device on a single IP into one “person” in your database — not necessarily a trivial task.) The downside of using one of these tools is that they are not cheap and that they are more easily blocked by visitors with privacy extensions.
Another approach is to capture this data server-side, meaning that your company’s server records the data every time somebody visits your site. This requires some engineering work around how to capture the data, process it, and store it in a scalable way.
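As a rough illustration of what “server-side” means here, the sketch below logs the same fields from the list above on every request. It assumes Flask and a local SQLite table purely for brevity; your stack, the table layout, and the "anon_id" cookie name are all stand-ins:

```python
# A rough sketch of server-side visit logging, assuming Flask and SQLite.
# A real setup would write to your warehouse and skip static assets and bots.
import sqlite3
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)
db = sqlite3.connect("visits.db", check_same_thread=False)
db.execute("""CREATE TABLE IF NOT EXISTS visits
              (visited_at TEXT, page_path TEXT, full_url TEXT,
               referrer TEXT, user_id TEXT)""")

@app.before_request
def log_visit():
    db.execute(
        "INSERT INTO visits VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(),  # timestamp, stored in UTC
         request.path,                            # the page visited
         request.url,                             # raw URL; parse UTMs later
         request.referrer,                        # HTTP referrer (may be None)
         request.cookies.get("anon_id")),         # anonymous or known user ID
    )
    db.commit()
```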
If the first two options aren’t possible for some reason (or if it would take your engineering team a very, very long time to get them done), you can still do something! For a simple approach, have somebody write a short JavaScript snippet that stores a cookie on your visitor’s computer with the data above the first time they visit (if you’re running a first-touch attribution system; more on that in a sec). Later on, when the visitor makes an account or a purchase, you read this cookie back into your database. There are some downsides here (especially as Safari’s ITP anti-tracking updates get more and more robust — see this article and this in-depth technical guide for more information), but for now it’s still better than nothing and I’ve seen it work when resources were tight.
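If you go the cookie route, the server-side half is small: when the visitor converts, read the cookie back and persist it. Here’s a hedged sketch that assumes a Flask checkout endpoint and a cookie named "first_touch" written as JSON by your snippet (both names are made up for illustration):

```python
# A sketch of reading a hypothetical "first_touch" cookie back at conversion time.
# Assumes a Flask checkout endpoint and a client-side snippet that stored JSON.
import json

from flask import Flask, request

app = Flask(__name__)

@app.route("/checkout/complete", methods=["POST"])
def checkout_complete():
    raw = request.cookies.get("first_touch")   # written by the JS snippet on first visit
    first_touch = json.loads(raw) if raw else None
    if first_touch:
        # In reality: insert into your warehouse, keyed by the new user/order ID.
        print(request.form.get("order_id"),
              first_touch.get("url"), first_touch.get("referrer"))
    return "ok"
```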
For app installs, the data is similar. You’ll want to partner with a Mobile Attribution Partner like Adjust or AppsFlyer, and collect:
Timestamps
URL Parameters that encode your campaign and channel
After collecting this data, you’ll have multiple rows of data for every visit that somebody made. You’ll need to do two things:
First, determine which channel and campaign drove each visit. Do this by looking at the UTM data and referer data you get for each visit, and putting it in a bucket. A few examples: traffic comes from Google and has UTMs on it? That’s a search ad. Traffic comes from Google and doesn’t have UTMs? Probably organic search! No referer or utm? That’s direct.
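Here’s a hedged sketch of that bucketing logic. The rules mirror the examples just given; a real rule set grows with your channel mix, and the function and field names are purely illustrative:

```python
# A sketch of channel bucketing from the raw URL and referrer of a single visit.
# The rules mirror the examples above; real rule sets get much longer.
from typing import Optional
from urllib.parse import parse_qs, urlparse

def bucket_visit(full_url: str, referrer: Optional[str]) -> dict:
    utms = {k: v[0] for k, v in parse_qs(urlparse(full_url).query).items()
            if k.startswith("utm_")}
    ref_host = urlparse(referrer).netloc if referrer else ""

    if "google." in ref_host and utms:
        channel = "search ad"
    elif "google." in ref_host:
        channel = "organic search"
    elif not ref_host and not utms:
        channel = "direct"
    else:
        channel = utms.get("utm_medium", "other")
    return {"channel": channel, "campaign": utms.get("utm_campaign")}

print(bucket_visit(
    "https://example.com/?utm_source=google&utm_medium=cpc&utm_campaign=brand",
    "https://www.google.com/"))
# {'channel': 'search ad', 'campaign': 'brand'}
```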
Second, flatten the multiple visits down into one row per customer. In order to do that, you’ll need to decide whether you want to use a first-touch, last-touch, multi-touch, or time-decay attribution system, so let’s talk about that for a bit.
Picking an attribution model
The difference between first-touch and last-touch is that in first touch, you are assigning credit to the channel or campaign that you identified the first time somebody came to your site. For last-touch, you assign credit to the channel or campaign that drove the most recent visit before purchase. A multi-touch system finds some sort of compromise that divides credit between the first and last touches, and sometimes between any additional visits that happened along the way. A time-decay system is a combination of one of the above, plus logic that says that if a visit happened a long time ago, it might be ignored or counted less than one that happened more recently.
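To make the differences concrete, here’s a toy sketch that flattens one customer’s visits under first-touch, last-touch, and a simple linear multi-touch split (a time-decay model would add a recency weighting on top of the linear version):

```python
# A toy sketch of flattening one customer's visits under three simple models.
# Each visit is (timestamp, channel); credit comes back as {channel: fraction}.
from collections import defaultdict

def assign_credit(visits, model="first_touch"):
    visits = sorted(visits)                      # oldest first
    credit = defaultdict(float)
    if model == "first_touch":
        credit[visits[0][1]] = 1.0
    elif model == "last_touch":
        credit[visits[-1][1]] = 1.0
    elif model == "linear":                      # a simple multi-touch split
        for _, channel in visits:
            credit[channel] += 1.0 / len(visits)
    return dict(credit)

visits = [("2024-05-01", "podcast"), ("2024-05-09", "organic search"),
          ("2024-05-10", "search ad")]
print(assign_credit(visits, "first_touch"))   # {'podcast': 1.0}
print(assign_credit(visits, "linear"))        # roughly 1/3 to each channel
```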
Some of your channels may already have components of time-decay systems built into them: Facebook’s default attribution window, for example, takes credit for actions that happen within one day of somebody viewing your ad or within 28 days of them clicking on it. However, because this model is based on the data that gets passed to your servers when somebody visits your site (or downloads your app), those attribution windows usually aren’t accounted for. This is an important way in which this model will differ from your ad platform’s attribution.
Many very smart people have written blog posts about these different approaches, occasionally dismissing anybody who would dare use a simpler attribution model. I think that early on, it matters less than you would think. For one, a huge number of your visitors are probably only hitting your site once! If you’re pressed for time or resources, and if it’s very early on for your business, just figure out what campaigns are driving new buyers to your site. As you get more sophisticated, start looking at last touch, and then at blended and time-decay models.
Have you decided on first-touch vs last-touch? Great – record that for now, and we’ll come back to it once it’s time to flatten the data and marry things up with offline attribution data.
Step 2: Collecting offline attribution data
This section contains the steps the marketer can take to understand how their offline channels drove purchases. The main focus is on the post-sales survey, also known as the “how did you hear about us” survey. We also discuss custom URLs and coupons, and some of the limitations of using these to determine attribution.
“How did you hear about us” surveys (HDYH) are crucial for understanding the performance of offline channels. Yes, directly asking people how they heard about you produces a different type of data than digital attribution: a survey overweights channels or campaigns that have more salience than others. All in all, I’m not convinced this is the worst thing in the world, and in my experience most people who implement HDYH find this data very useful.
Some marketers look for the easy way out, and try to send HDYH surveys via email. I think this is a huge mistake. Putting the HDYH survey directly in your funnel, somewhere in the checkout experience, leads to dramatically better completion rates – it’s not at all unusual to hear of 80%+ completion rates on a well-placed survey. (Some marketers with lower completion rates extrapolate their results to cover the blank responses in their survey. Unfortunately, that doesn’t work well with a system like this one, which needs a specific data point for each purchase.)
Once it’s in your funnel, it can be helpful to test the copy that introduces the HDYH. Examples range from something simple, like “How did you hear about us?”, to something more personal, like “It would make our marketing team’s day if you told us how you heard about us”. Besides any brand considerations, keep in mind that more personal approaches tend to lead to better conversion rates. On the other hand, more personal approaches require more copy, which may cause UI/UX issues.
Additionally, you can test presenting the survey to visitors at different times. I’ve always advocated for placing the HDYH as close to the point of conversion (e.g. directly after the sale or the account creation) as possible. It feels like a fair quid pro quo for the user, and it leads to high completion rates because the user is already in a “conversion mood.” One different approach worth calling out, though, is minted.com’s: when you visit their site, they ask you how you heard about them well before you convert, in exchange for a coupon. Pretty clever, and the fact that they ask earlier in the funnel may mean that they get better quality responses from their customers (and responses from many more visitors than just those who convert!)
It can be very useful to randomize the order of the responses, so that you don’t get bias from people who always select the first option.
Some people add new channels to the survey options a few weeks before a campaign goes live, to build a baseline of how many people (mistakenly) think they’ve seen you on that channel.
For channels like podcasts, it can be very helpful to gather not only that people heard about you on a podcast, but precisely which one. This lets you attribute your podcast purchases much more precisely, since in my experience less than 15% of people who say they heard about you on a podcast will actually use a podcast link or coupon. People generally like supporting their favorite podcasts – more on taking advantage of this in a future post – so it’s worth the effort to engineer this. You can either have a list of all the shows you’re on (which you update every time you advertise on a new show) or an empty text area that you then manually classify. Listing all of your shows can theoretically leak your buying strategy to your competitors, but I think that ship is sailing anyway with Google’s new podcast search coming out. Manual classification can be a pain, depending on how many purchases you have a month, but it can also be incredibly useful because you see new trends that are otherwise lost in the framework you impose on your visitors. I’ve built some tools that make manual classification easier, which I’ll write about in the future.
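If you go with the free-text box, even a crude fuzzy match against your show list takes a lot of the pain out of manual classification. A sketch using only the Python standard library, with placeholder show names:

```python
# A sketch of semi-automatic classification of free-text podcast answers.
# Show names are placeholders; anything unmatched falls back to manual review.
from difflib import get_close_matches

KNOWN_SHOWS = ["This American Life", "Reply All", "How I Built This"]

def classify_podcast_answer(answer: str) -> str:
    matches = get_close_matches(answer.strip().title(), KNOWN_SHOWS, n=1, cutoff=0.6)
    return matches[0] if matches else "NEEDS MANUAL REVIEW"

print(classify_podcast_answer("this american life"))     # This American Life
print(classify_podcast_answer("heard it on the radio"))  # NEEDS MANUAL REVIEW
```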
How about non-HDYH methods of offline attribution? Those generally fall into three categories:
putting custom URLs and coupons in your advertisements (again, less than 15% of people actually use the code/URL)
using lift analysis (e.g. measuring how many more site visits you got in Chicago after you started running local radio ads in Chicago — important, but actually easier to do once you have an existing attribution system spitting out a deterministic attribution result for each purchase; see the sketch after this list)
buying some sophisticated outdoor inventory that tries to match people who passed by your ad with device advertising IDs (AAIDs or IDFAs). (I haven’t had any success with this – let me know if you have)
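For the lift-analysis bucket, the core arithmetic is simple even before you get into matched-market modeling. A toy sketch with made-up numbers, comparing visit growth in the test market against a control market:

```python
# A toy lift calculation with made-up numbers: daily site visits in the test
# market (Chicago, where the radio ads started) versus a control market.
before = {"chicago": [900, 950, 920], "denver": [600, 610, 590]}
after  = {"chicago": [1150, 1200, 1180], "denver": [620, 600, 615]}

def mean(xs):
    return sum(xs) / len(xs)

test_growth = mean(after["chicago"]) / mean(before["chicago"]) - 1
control_growth = mean(after["denver"]) / mean(before["denver"]) - 1
print(f"Chicago grew {test_growth:.1%}, control grew {control_growth:.1%}; "
      f"rough estimated lift of {test_growth - control_growth:.1%}")
```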
Again, if you get multiple survey results for every person you’re interested in analyzing, you’ll need to decide how to flatten the data down to one row in the last step.
Step 3: Marrying the two data sets
Now that we’ve talked about collecting site visit data and measuring non-digital channels, it’s time to map out an attribution taxonomy that will answer the business questions you have. The purpose of the taxonomy is to properly nest the specific pieces of data you have under broader buckets that are useful for you. For example, knowing that somebody came in from a paid Facebook campaign, you may want to store the name of the campaign, then also roll that campaign up to ‘facebook’, then ‘paid social’, and then ‘paid’. That way, you can answer four different questions: “How is this campaign doing?”, “How is Facebook doing?”, “How is paid social doing?”, and “How is Paid doing?”.
Since not every data point you have will be of the same granularity (for example, if somebody comes in on Radio, you may not be able to map it to a specific campaign, the way you can with Facebook), when you implement this hierarchy you’ll also need placeholders for those times when you don’t have enough data to properly assign a purchase to that level. That’s because you don’t want to miss those purchases when you sum up purchases at a level that doesn’t have data for all traffic sources. (The placeholders would map to the cells that have a ‘-‘ below.)
Here’s an example of a tree that you could use, with some common traffic sources laid out:
Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
---|---|---|---|---|
Paid | Paid Digital | Paid Social | Facebook ad | Specific Facebook campaign |
Paid | Paid Digital | Paid Social | Instagram ad | Specific Instagram campaign |
Paid | Paid Digital | Paid Social | Twitter ad | Specific Twitter campaign |
Paid | Paid Digital | Paid Social | Pinterest ad | Specific Pinterest campaign |
Paid | Paid Digital | Paid Social | Snapchat ad | Specific Snapchat campaign |
Paid | Paid Digital | App stores | iTunes | - |
Paid | Paid Digital | App stores | Google Play | - |
Paid | Paid Digital | Other apps/sites | Quora | Specific Quora campaign |
Paid | Paid Digital | Other apps/sites | Reddit | Specific Reddit campaign |
Paid | Paid Digital | Other apps/sites | Tinder | - |
Paid | Paid Digital | Search ad (high funnel) | Google (high funnel) | Specific Google campaign (high funnel) |
Paid | Paid Digital | Search ad (high funnel) | Bing (high funnel) | Specific Bing campaign (high funnel) |
Paid | Paid Digital | Affiliate | Influencers | Specific influencer |
Paid | Paid Traditional | Broadcast | TV | - |
Paid | Paid Traditional | Broadcast | Podcast | Specific Podcast |
Paid | Paid Traditional | Broadcast | Radio | - |
Paid | Paid Traditional | Broadcast | Sirius | - |
Paid | Paid Traditional | Direct Mail | Shared Mailer | - |
Paid | Paid Traditional | OOH | Subway | - |
Paid | Paid Traditional | OOH | Phone booths | - |
Paid | Paid Traditional | - | - | - |
Organic | Digital | Search organic | Google (low funnel paid) | Specific Google campaign (low funnel) |
Organic | Digital | Search organic | Bing (low funnel paid) | Specific Bing campaign (low funnel) |
Organic | Digital | Search organic | Google organic | - |
Organic | Digital | Search organic | Bing organic | - |
Organic | Digital | Search organic | Yahoo organic | - |
Organic | Digital | Main Site | Homepage | - |
Organic | Digital | Main Site | Category Page | - |
Organic | Digital | Main Site | Blog | - |
Organic | Digital | App store organic | iTunes organic | - |
Organic | Digital | App store organic | Google Play organic | - |
Organic | Digital | Social organic | Facebook organic | - |
Organic | Digital | Social organic | Instagram organic | - |
Organic | Digital | Social organic | Twitter organic | - |
Organic | Digital | Social organic | Pinterest organic | - |
Organic | Digital | Social organic | Snapchat organic | - |
Organic | Digital | Refer a Friend | - | - |
Organic | Digital | Web traffic | Press, article | - |
Organic | Digital | Direct | - | - |
Organic | Traditional | Word of mouth | - | - |
Organic | Traditional | Press, article, book | - | - |
Organic | Traditional | Event | - | - |
There’s one important decision in the tree above that you may or may not agree with, so I’m calling it out:
Branded Search is under ‘Organic’. Even though you spend money on branded search, you usually can’t spend more on it and scale the channel efficiently, because extra spend on your brand terms can’t generate more searches for those terms – if anything, the relationship runs the other way: you’re able to spend more as brand awareness grows. Misunderstanding this relationship leads to ridiculous questions, like the time I received the following question during due diligence: “It looks like your branded search channel is producing sales at a great CPA. Why haven’t you moved your entire marketing budget into branded search?” Slotting it under ‘Organic’ is a better measure of what is going on. As you spend money on other channels, brand awareness grows, and more people search for you online.
After you’ve finished creating your attribution hierarchy, you should make a set of rules that determines where a particular order or user falls in the hierarchy. The first step is determining what those rules should be (which is best done in Excel), and the second step is figuring out how to implement them in your database (usually a Python, R, or SQL script).
First, take all of your orders that only have digital attribution (UTM tracking, custom landing page, or promo code), and map them to the hierarchy. For example, orders that come in with only tracking information for a Facebook campaign get mapped to Paid > Paid Digital > Paid Social > Facebook Ad > Specific Facebook Campaign. Next, take the orders that only have survey attribution, and map them to the hierarchy. For example, orders that only came in with a survey response saying they came from a podcast get mapped to Paid > Paid Traditional > Broadcast > Podcast > “This American Life”. Finally, take the orders that have both digital and survey attribution, and map them to the hierarchy. Perhaps you just know that somebody came in on a podcast, but not the exact show, and you also see that they came in on a branded Google search. This is your opportunity to decide what kind of credit the channels get — does podcast get all of the credit, because you discount the branded search? Do you do a 50/50 split? Etc. At this point, you’ll need to refer to your decisions about first-touch vs. last-touch attribution, linear vs. time-decay, and so on. When collapsing your digital attribution data and your offline attribution data down to one data point, consider your order of operations – you may want to postpone flattening until after your script has reviewed all the rows you have for each user.
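To make the shape of those rules concrete, here’s a stripped-down sketch. The taxonomy paths and the equal split mirror the examples in this section; the field names, the fallback to Direct, and the overall structure are illustrative assumptions, not a prescription:

```python
# A stripped-down sketch of the attribution rules described above.
# Taxonomy paths follow the tree in this post; field names are illustrative.
def resolve_attribution(order):
    digital = order.get("digital")    # e.g. {"channel": "facebook", "campaign": "..."}
    survey = order.get("survey")      # e.g. {"channel": "podcast", "show": None}

    paths = []
    if digital and digital["channel"] == "facebook":
        paths.append("Paid > Paid Digital > Paid Social > Facebook ad > "
                     + (digital.get("campaign") or "-"))
    if survey and survey["channel"] == "podcast":
        paths.append("Paid > Paid Traditional > Broadcast > Podcast > "
                     + (survey.get("show") or "-"))
    if not paths:
        return [("Organic > Digital > Direct > - > -", 1.0)]
    # Quick-and-dirty starting point: split credit equally across data points.
    return [(path, 1.0 / len(paths)) for path in paths]

order = {"digital": {"channel": "facebook", "campaign": "spring_sale"},
         "survey": {"channel": "podcast", "show": "This American Life"}}
print(resolve_attribution(order))
# [('Paid > Paid Digital > Paid Social > Facebook ad > spring_sale', 0.5),
#  ('Paid > Paid Traditional > Broadcast > Podcast > This American Life', 0.5)]
```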
One quick and dirty approach I’ve used is to start off by just assigning equal credit to the digital and the survey attribution results, so somebody who comes in on a Facebook ad and says they heard about us on a podcast results in a 50/50 split between the two channels. I like storing the ‘credit’ for each channel as a decimal (so 50% credit is 0.50) — this makes it very easy to sum up your purchases at the end of the month. Starting off like this makes it easy to adjust the split over time as you learn more about what is really driving your business.
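Storing credit as a decimal pays off at month end because summing is all you need. A tiny sketch:

```python
# A tiny sketch of month-end reporting when credit is stored as a decimal.
from collections import defaultdict

# (order_id, channel, credit) rows produced by your attribution rules
rows = [(1, "Facebook ad", 0.5), (1, "Podcast", 0.5),
        (2, "Podcast", 1.0), (3, "Google organic", 1.0)]

totals = defaultdict(float)
for _, channel, credit in rows:
    totals[channel] += credit

print(dict(totals))   # {'Facebook ad': 0.5, 'Podcast': 1.5, 'Google organic': 1.0}
```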
One thing I don’t recommend is creating a waterfall that benefits one channel over another simply because it has digital tracking, or (conversely) because of a belief that a channel is under-represented and needs an attribution boost. This almost always hides what is really going on. If you have multiple data points on a purchase, use them!
Figuring out all of these rules is a ton of work - first because there are a lot of them, and second because you need to think through the individual situations and decide to encode business logic that will have a significant impact on how you manage your spend. One helpful approach to make quick progress is to pull a list of your most common combinations of attribution data points, and then to sort the list so you spend the most time thinking about the most common cases. The payoff is worth it though — each month you’ll have a consistent way of answering questions about what channels drove performance for you.
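Pulling the most common combinations is a couple of lines once the raw signals sit in a dataframe. A sketch assuming pandas and illustrative column names:

```python
# A sketch of finding the most common attribution-signal combinations,
# assuming pandas and illustrative column names.
import pandas as pd

orders = pd.DataFrame({
    "utm_source":      ["facebook", "facebook", None,      None, "google"],
    "survey_response": ["podcast",  None,       "podcast", None, "friend"],
})

combos = (orders.fillna("none")
                .value_counts(["utm_source", "survey_response"])
                .reset_index(name="orders"))
print(combos)   # most common combinations first
```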
At some point, you look at your list of rules, and you wonder: should I keep tweaking this? Or is it done? What happens if I add a new channel? That’s the right time to implement the rules in code, so that you can start testing them against real data. Over time, you’ll need to iterate on your attribution rules, and it can be useful to engineer your system so that you can create new versions of the rules and test the data sets against each other, rather than needing to replace the whole thing each time you want to make a change (for example, by tagging each rule set with a version).
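One lightweight way to get that versioning is to keep each rule set as its own function (or a version-tagged table) and diff the outputs on the same orders before you switch over. A sketch with toy rules:

```python
# A sketch of comparing two versions of your attribution rules on the same orders.
# rules_v1 / rules_v2 stand in for whatever your real rule functions are.
from collections import Counter

def rules_v1(order):
    return "Paid Social" if order.get("utm_source") == "facebook" else "Direct"

def rules_v2(order):
    if order.get("survey_response") == "podcast":
        return "Podcast"
    return rules_v1(order)

orders = [{"utm_source": "facebook"},
          {"utm_source": "facebook", "survey_response": "podcast"},
          {}]

for version, rules in [("v1", rules_v1), ("v2", rules_v2)]:
    print(version, Counter(rules(o) for o in orders))
# v1 Counter({'Paid Social': 2, 'Direct': 1})
# v2 Counter({'Paid Social': 1, 'Podcast': 1, 'Direct': 1})
```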
Closing considerations
Where does this method fall short?
There are a few limitations to this method that you should keep in mind.
Depending on how you capture web traffic and mobile traffic, it may need adjustment for funnels that have a lot of cross-device conversion behavior. Think carefully about the possible flows that your visitors take, and make sure that you’re consolidating multiple devices into one identity when necessary.
It does not account for view-through signals. As a marketing program gets more sophisticated and begins to invest heavily in display and similar channels, those views need to be modeled (via holdout groups and matched market pairs, for example) and overlaid on the results of this method.
It is not probabilistic, there’s no AI, and it doesn’t use machine learning – this is a fairly straightforward way of looking at the data that is coming into your business, and it takes the data you ingest at face value rather than looking for deeper behavioral patterns that might be hidden within.
Privacy
Inasmuch as it doesn’t involve following people around the web, matching their cookies to identities on an ad network, etc., this is a relatively privacy-friendly way of going about attribution. However, it’s important to note that under GDPR and (for healthcare startups) HIPAA, you may be collecting data in a way that makes you liable for what happens with it, or for what happens if somebody asks you to delete it. You should discuss this with your CTO or with your data officer.
What’s next?
Once you have this type of system in place, you can work with your data team to do proper lift analysis, holdouts, and Media Mix Modeling (see this incredible piece about MMM from ThirdLove, if that sounds interesting). All of these get much easier once you have a universal attribution scheme across your entire purchase database, and they can also feed right back into your model as you find interactions that you didn’t know about or find opportunities to tweak the amount of credit a channel should receive (for example, you might decide that Facebook deserved 25%, or 75%, credit instead of 50% in the example above). Eventually, you can progress all the way through to an ML-based attribution model.
Completing the work above gives you a single answer to the question of “where did this customer or purchase come from?” You’ll find that the answers aren’t perfect, but having a system like this gives you something to tweak until you dial it in to something accurate. And in the meantime, you’ll probably find some major optimization results, even with imperfect data, that should give you a few quarters or a few years of big wins on CPA and growth – something that is good for your startup as well as your career.
Thank yous:
I couldn’t have written this guide without help from the following people. Thank you for showing me how you do attribution, reading drafts, and helping me refine this project.
- Ben Clark, Nick Lamm, Jonathan Metrick, Mike Baker