What’s the most frustrating aspect of working as a client-side researcher? Some recent posts on Twitter #mrx suggest that research agencies may be the culprit. Well, we all have our bad days. But when working as a client-side research manager I found the attitudes and misconceptions about research among some of my colleagues to be one of the most frequently occurring issues.
First of all, there were the general misconceptions about research:
- Market research is just about asking people direct questions.
- We should run some focus groups because that’s what we usually do.
- You have to follow up qualitative research with a quantitative study.
- Why would we want to ask people what they want?
- We should give customers what they ask for.
- 30 qualitative interviews is too small a sample to get any useful feedback.
- Report back to me when you have completed the research (I don’t need to be involved do I?).
- I need some research to tell me what to do.
Then there were the various reasons why we should not conduct research:
- We haven’t got time to do any research because it takes too long to conduct and we didn’t include it in the project plan.
- Research takes too long to complete. You will need to allow me 2 to 3 weeks to approve the budget and a further week to sign off the questionnaire as I’m very busy.
- We don’t need research as we already know what customers need, we just can’t deliver it.
- Our advertising agency ran a couple of focus groups to test their ideas.
- There’s no point doing research because our products are too complex for customers to understand.
- I don’t need research to tell me how to produce great creative designs.
- Research is too expensive and we’d rather spend the money testing different creative ideas.
- If we conduct a survey it will raise expectations that we are going to do something about the level of service we offer.
- Our research is not helping us make strategic decisions.
And helpful suggestions about how to design or conduct research:
- Why can’t we do all our own research using online surveys?
- We could get our sales people to ask customers what they think of the service they provide?
- I’d like to run some focus groups to get feedback on our new idea from consumers.
- You should only have 3 or 4 questions in a customer satisfaction survey because that’s all Enterprise Cars ask in their questionnaire (I received this challenge at 2 separate organisations).
- Having a “very poor” option in the rating scale is too negative.
- We can discuss how we are going to use the research once we have the results back.
Then there were the reasons for doing research:
- We need some evidence for the board to support the decision that we’ve already made.
- We don’t need to publish the research if we don’t agree with the findings.
- We want to quote a reasonably high satisfaction score in our report and accounts.
And it didn’t always go swimmingly when results were presented back:
- It was very interesting, but it didn’t tell me anything I didn’t already know.
- There shouldn’t be any recommendations in the report as it’s marketing’s role to decide what actions should be taken as a result of the research.
- I haven’t got time to attend the presentation, can you email me the findings?
- The staff knew it was a mystery shopper and that’s why they didn’t follow the normal sales process.
- The mystery shopper probably didn’t understand the explanation that our sales person provided as it’s a complex subject.
- The research was paid for from my budget and so I will circulate the findings.
I would point out that some of these comments and perceptions are thankfully not very common. Some are probably a reflection of the industries I have worked in, financial services and retail. However, even within the same organisation I experienced a wide range of reactions and expectations. The comments above also came from a variety of levels of management, so it would be unfair to say they were held only by junior or inexperienced managers.
Why do these attitudes exist?
There are various reasons for some of these attitudes, including internal politics and a lack of stakeholder engagement, but here are my main suggestions:
- Not top of their agenda! Many middle and senior managers spend little, if any, of their day involved in research projects. When they do get involved there is often little expectation of being engaged in any co-creative process. As a consequence they are not always prepared to commit the time and energy that some projects require.
- Illusion of validity. This refers to how as people acquire more knowledge and expertise they have a tendency to develop an enhanced illusion of their skill and become over-confident in their abilities to predict the future. As Kahneman put it, they are “dazzled by their own brilliance and hate to be wrong”.
- This may be especially problematic in the financial services sector because it is packed full of highly skilled and numerate professionals. The sector’s senior management is dominated by bankers, actuaries, underwriters, and accountants. With hindsight this culture of over-confidence seems to have been a contributory factor in the 2008 financial crisis. However, forecasting errors are to be expected because the world is unpredictable.
- Not-invented-here bias. People who come up with an idea have a tendency to become attached to it and greatly overvalue its potential importance. The danger is that they become obsessively attached to their own idea and fail to objectively evaluate ideas from other sources (e.g. from market research).
- Further, organisations establish cultures focused on their own beliefs, terminology, processes and products. Dan Ariely asserts that the overuse of acronyms, which is rife in financial services, can facilitate this process by giving the impression of insider knowledge and by enabling people to talk in a form of shorthand.
- Research in the media. The image of research in the media is often dominated by opinion polls and ‘surveys’ that latch onto a correlation that may prove to be spurious. This is sometimes the result of poorly designed studies (see The Law Of Small Numbers).
- But also by the cognitive bias that Kahneman calls What You See Is All There Is (WYSIATI). This means that people tend to assume that what is visible is all there is and will not automatically check to see if there is more than they are aware of. Critics of market research appear particularly prone to this bias, as they rarely refer to the many and varied modern research techniques deployed by the industry.
- Shareholder value versus mission-led businesses. Evidence suggests that mission-led businesses outperform the market by nine times. This may be because there is a clear focus in the business, including a desire and passion to meet genuine customer needs.
- By definition, management behaviour tends to be aligned with the core values of such businesses. Mark Earls asserts that such businesses may benefit from our ‘herd’ instinct, as people with similar beliefs and values like to align themselves with organisations that hold the same goals. They also tend to value customer feedback because they appreciate how important it is to retain customer trust in the brand.
- Brand values are just words. Some companies, including Aviva, one of my past employers, have recognised that being customer centric is not just the responsibility of the research department. Where the company culture encourages all employees to engage with customers and get involved with understanding customer needs there tends to be a more open-minded attitude towards how research can assist them in their day job.
- However, for this to work it is essential that such behaviours are driven from the top. Customers look for confirmation that management behave and make decisions in accordance with their core values and beliefs. Otherwise brand values are just words that have no substance. Recent financial mis-selling scandals in the UK have shown how this is sometimes the case.
- Loss aversion. Just like the rest of us, business people dislike losses more than they value gains. This is compounded in businesses that are obsessive about cost cutting, resulting in too much focus on the cost of research rather than its benefits. As it is often difficult to estimate the return on investment for a research project, many potentially beneficial research projects fail to get support from management in such organisations.
- Lack of follow-up. It is essential that any significant research project is followed up, that actions, not just results, are communicated, and that insights are linked to marketing or business strategy. If this is not visible it can give the impression that research isn’t being used to help drive the business forward.
Thank you for reading my post and I hope it raised awareness of some of the challenges faced by the client-side researcher.
You can view my free Digital Marketing Toolbox here.
You can access links to all my posts by going to the index page here.
- About the author: Neal provides digital optimisation consultancy services and has worked for brands such as Deezer.com, Foxybingo.com, Very.co.uk and partypoker.com. He identifies areas for improvement using a combination of approaches including web analytics, heuristic analysis, customer journey mapping, usability testing, and Voice of Customer feedback. Aligning each stage of the customer journey with the organisation’s business goals helps to improve conversion rates and revenues significantly, as almost all websites benefit from a review of customer touch points and user journeys.
- Neal has had articles published on website optimisation on Usabilla.com and as an ex-research and insight manager on the GreenBook Blog research website. If you wish to contact Neal please send an email to firstname.lastname@example.org. You can follow Neal on Twitter @northresearch and view his LinkedIn profile.