How is the survey administered online?
Surveys are administered through our survey collection partners, who work through an online platform that houses our entire survey experience. Respondents are invited to take the survey and come from double opt-in vendors. The survey is not Resonate branded and is not a pop-up. Additional information is available in our Methodology overview.
How do you choose which devices to send survey requests to?
We don’t directly target devices. To be eligible to take the survey, a respondent must be in our survey vendor’s panel, be over the age of 18, live in the United States, fit into a non-full quota (age/gender, income, etc.), and meet our strict screening requirements on device and behavior data.
Once someone has taken the survey, when could they take another survey from Resonate?
Survey respondents are not re-contacted for 90 days after they’ve completed a survey with us.
Is every survey question asked reportable in the platform?
A question that we ask in a survey, whether in the United States Consumer Study (USCS) or in a Flash Study, must apply to 4-5% of the population in order to be reportable in the platform.
Based on our approach to surveys and sample sizes, in order for an attribute to be fully modeled we have to find enough people that fit the criteria for that attribute.
Therefore, just because we ask a question in a Flash Study or in the USCS, it does not guarantee that the question will be reported in the platform.
For example, you may have seen an insight from one of our Recent Events Flash waves that was not repeated in the next wave. This is most likely because we did not receive enough responses in the next round of survey fielding for that insight to be fully modeled. If we gain enough responses for the question in a following wave, we will resume reporting on that insight.
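The reportability rule above can be sketched as a simple proportion check. This is an illustrative sketch only: the function name and the exact threshold (we use the lower bound of the stated 4-5% range) are assumptions, not Resonate's actual code.

```python
# Illustrative sketch of the reportability rule: an attribute is reportable
# only if enough survey respondents match it. The 4% threshold is the lower
# bound of the 4-5% range described above.

REPORTABILITY_THRESHOLD = 0.04

def is_reportable(matching_respondents: int, total_respondents: int) -> bool:
    """Return True if the attribute applies to enough of the sample."""
    if total_respondents == 0:
        return False
    return matching_respondents / total_respondents >= REPORTABILITY_THRESHOLD

print(is_reportable(500, 10_000))  # 5% of sample -> True
print(is_reportable(200, 10_000))  # 2% of sample -> False
```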
How frequently are vertical specific categories asked in your survey?
Vertical surveys are rotated throughout the year and scheduled based on business requirements. For example, if a certain vertical is more important to our clients, we may field its questions more frequently.
What characteristics are survey respondents weighted on?
We weight on the following parameters:
- Age/Gender composite
- Presence of Children
- Registered to Vote
- Time Spent Online
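As a rough illustration of how weighting on parameters like these works, the sketch below applies simple post-stratification: each cell is weighted by its population share divided by its sample share. The function name, cells, and proportions are invented for illustration; Resonate's actual weighting scheme is not public.

```python
# Illustrative post-stratification sketch: weight respondents so the sample
# matches population benchmarks on a given parameter. Cell names and shares
# are invented examples.

def cell_weights(sample_share: dict, population_share: dict) -> dict:
    """Weight each cell by population share / sample share."""
    return {cell: population_share[cell] / sample_share[cell]
            for cell in sample_share}

# Example: men are under-represented in the sample relative to population,
# so they are weighted up and women are weighted down.
weights = cell_weights(
    sample_share={"male": 0.40, "female": 0.60},
    population_share={"male": 0.49, "female": 0.51},
)
print(weights)
```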
Do you capture mobile browser/mobile app data in your behavioral database?
We capture desktop and mobile behavior in our behavioral database.
Do you capture search activity in your behavioral database?
How do your models work?
Our models estimate the probability that a device on the internet is operated by someone with a given attribute value.
Under the hood, our predictive models assess whether the presence of some feature (website visits, NLP category visits, etc.) increases the odds that the person behind the device has the attribute value. We retrain our predictive models as new research data becomes available.
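Conceptually, this is a logistic model: each observed feature shifts the log-odds of the attribute. The sketch below is a hand-rolled toy, not Resonate's actual model; the feature names, weights, and intercept are all invented for illustration.

```python
import math

# Toy logistic model (illustrative only): each observed behavioral feature
# contributes a fixed log-odds weight toward the attribute.
WEIGHTS = {"visited_auto_sites": 1.2, "visited_news_sites": 0.3}
INTERCEPT = -1.5  # baseline log-odds of having the attribute

def attribute_probability(features: set) -> float:
    """Probability that the device's operator has the attribute."""
    logit = INTERCEPT + sum(WEIGHTS[f] for f in features)
    return 1 / (1 + math.exp(-logit))

# Presence of a feature raises the estimated probability above baseline.
print(attribute_probability(set()))
print(attribute_probability({"visited_auto_sites"}))
```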
How do you build an accurate model?
There are several steps to creating accurate models:
- We start with psychometrically guided survey question structures
- Exhaustively examine all possible survey response patterns to identify quality survey respondents
- Execute advanced models with 5-fold cross-validation and 20% hold-outs, leveraging L2 regularization to control model complexity
For updating models, we randomly hold out 20% of the data for each question. This means the optimization that tunes model parameters never sees these responses; they are purely for internal validation. The hold-out set has already been surveyed, we know their responses, and they passed our survey data quality QA.
Note that this hold-out set has nothing to do with the survey design, and audiences created from survey data have no random hold-out set.
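The validation scheme above can be sketched as a random 20% hold-out followed by a 5-fold split of the remaining 80%. The seed, pool size, and fold logic here are illustrative, not Resonate's actual pipeline.

```python
import random

# Sketch of the validation scheme: hold out a random 20% of respondents for
# a question, then 5-fold cross-validate on the remaining 80%.
random.seed(0)
respondents = list(range(100))  # stand-in respondent IDs
random.shuffle(respondents)

holdout = respondents[:20]    # 20% the model tuning never sees
trainable = respondents[20:]  # 80% used to fit and cross-validate

# Split the trainable pool into 5 folds.
folds = [trainable[i::5] for i in range(5)]

for k, fold in enumerate(folds):
    train = [r for r in trainable if r not in fold]
    # fit on `train` (with L2 regularization), validate on `fold`
    print(f"fold {k}: train={len(train)}, validate={len(fold)}")
```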
How do I know Resonate models are not modeled after non-human traffic?
We're sometimes asked how we know our models aren't built on non-human traffic, or bots.
The answer lies in how we collect our data.
In order for us to model a device, we need to observe that device over a 90-day period. During those 90 days, we have strict thresholds in place regarding the number of distinct domains and content categories observed as well as the patterns behind these behaviors. If these thresholds are met, the device is identified as a human and becomes eligible to be modeled.
In addition to our strict thresholds, our behavioral data firehose uses industry-standard validation tools to filter out suspected bot traffic.
So, at the end of the day, because of the length of time, variety and quality of behavioral data that we observe for our models, you can rest assured that when you target and measure with Resonate, you're targeting humans, not bots.
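The eligibility check described above can be sketched as threshold tests on observed behavior. The actual thresholds are not public; every number below is a placeholder.

```python
from dataclasses import dataclass

# Placeholder thresholds (illustrative only; the real values are not public).
MIN_DISTINCT_DOMAINS = 10
MIN_CONTENT_CATEGORIES = 5
OBSERVATION_DAYS = 90

@dataclass
class DeviceActivity:
    days_observed: int
    distinct_domains: int
    content_categories: int

def eligible_for_modeling(d: DeviceActivity) -> bool:
    """A device must show human-like breadth of behavior over 90 days."""
    return (d.days_observed >= OBSERVATION_DAYS
            and d.distinct_domains >= MIN_DISTINCT_DOMAINS
            and d.content_categories >= MIN_CONTENT_CATEGORIES)

print(eligible_for_modeling(DeviceActivity(90, 40, 12)))  # True
print(eligible_for_modeling(DeviceActivity(90, 2, 1)))    # False: bot-like
```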
Why does the projected audience size change depending on what attributes are used in an audience definition?
We predict the presence of every attribute across our cookie jar. When an audience definition contains OR statements, the Resonate platform first tries to satisfy them using waves in which all of the OR'd attributes are present. If the resulting audience is above the existing threshold, it presents that audience; if not, it looks at imputed data across all waves and presents that audience instead. This does not directly relate to cookie modeling.
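The fallback logic above can be sketched as follows. The function name and threshold value are illustrative assumptions; Resonate's actual sizing logic is not public.

```python
# Illustrative sketch of the audience-sizing fallback: prefer the audience
# built from waves where all OR'd attributes co-occur; fall back to imputed
# data across all waves if that audience is too small. Threshold is invented.

def projected_audience_size(common_wave_size: int,
                            imputed_size: int,
                            threshold: int = 1000) -> int:
    """Return the audience size the platform would present."""
    if common_wave_size >= threshold:
        return common_wave_size
    return imputed_size

print(projected_audience_size(5000, 8000))  # common waves suffice -> 5000
print(projected_audience_size(300, 8000))   # falls back to imputed -> 8000
```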
Are there attribute values that cannot be activated?
Yes, the following attribute values cannot be included in an audience for activation or engagement. This means audiences that contain these attributes cannot be activated or targeted as an audience for media delivery, and they cannot be included in a data append.
- Transgender Identity (sensitive area for targeting)
- Sexual Orientation (sensitive area for targeting)
- All appended 3rd party data.
- Low Sample Audiences (a great workaround is to combine several attribute values in a single audience to get better sample).
Attributes that cannot be activated are flagged as such in the Audience Builder in Segmentation Center with an ! icon.
3rd party appended data cannot be activated due to our contract with our 3rd party data provider.
What is Urbanicity and how does Resonate define it?
Regional population, economic, and other trends are often studied based on metropolitan and non-metropolitan areas, as defined by counties within the US. The classification of counties as metropolitan or non-metropolitan is sometimes referred to as Urbanicity.
Resonate reports on Urbanicity based on the USDA’s Rural-Urban Continuum Codes, which distinguish metropolitan counties by the population size of their metro area, and non-metropolitan counties by degree of urbanization and adjacency to a metropolitan area. The USDA classifies counties into three metropolitan and six non-metropolitan categories, with each county in the U.S. assigned one of the nine codes. This definitional scheme allows researchers to break county data into finer groups beyond metropolitan and non-metropolitan, particularly for the analysis of trends in non-metropolitan areas that are often related to population density and metropolitan influence.
Resonate’s reporting of Urbanicity is rooted in an understanding of the location of our survey panelists. While we do not directly report on this location, this serves as the basis through which we map the population to counties across the country. These counties are then classified against the USDA’s Rural-Urban Continuum and reported as Urbanicity within the Resonate platform.
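The county-to-Urbanicity mapping can be sketched as a lookup against the USDA scheme, where codes 1-3 are metropolitan and 4-9 are non-metropolitan. The county FIPS entries below are a tiny illustrative sample, not a complete or authoritative dataset.

```python
# Sketch of mapping counties to Urbanicity via USDA Rural-Urban Continuum
# Codes (RUCC): codes 1-3 are metropolitan, 4-9 non-metropolitan.

METRO_CODES = {1, 2, 3}  # metropolitan counties, by metro-area size

# Example county FIPS -> RUCC code (sample values for illustration only).
COUNTY_RUCC = {
    "51059": 1,  # Fairfax County, VA: large metro area
    "51091": 9,  # Highland County, VA: rural
}

def urbanicity(fips: str) -> str:
    """Classify a county as metropolitan or non-metropolitan by its RUCC."""
    code = COUNTY_RUCC[fips]
    return "Metropolitan" if code in METRO_CODES else "Non-metropolitan"

print(urbanicity("51059"))  # Metropolitan
print(urbanicity("51091"))  # Non-metropolitan
```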
Does Resonate report on sample sizes?
Resonate does not currently report raw sample sizes. Resonate leverages a predictive methodology in which we impute data for respondents who were not exposed to a particular survey question. The Resonate platform uses this imputed data when it is required to build a particular audience or facilitate a particular insight. Because of this dynamic use of imputed data, displaying a single sample size could be misleading about what it represents, so we do not display a sample size at this time.