Ethics was at the forefront of the Cambridge Analytica scandal, which is back in the headlines, this time with Facebook being sued over the infamous 2018 data breach. Google, meanwhile, is in the midst of an antitrust lawsuit. These are just two in a string of recent high-profile data stories – so it’s easy to see why data collection is viewed as a murky affair.
Yet data doesn’t have to be a dirty word – it just needs to be better understood. Reframed, re-explained. Because at the moment, there’s a sizable gap between how people think their data is used, and how it’s actually used.
According to research by The Trade Desk, just 27% of Brits understand that the internet is funded by advertising, while 41% have no understanding of advertising’s role when it comes to generating online revenue.
No single sector ‘owns’ best practice in digital ethics, and there isn’t an industry-setting gold standard to aspire to.
Yet given that brand purpose is increasingly a differentiator when customers choose where to spend their time and money, it makes sense that data – now a bedrock of the marketing and brand-building function – be handled with the same purpose-focused ethos.
Last year’s Edelman Brand Trust Survey revealed that only 39% of people believe tech companies put the welfare of customers ahead of profits. Moreover, 41% said they didn’t trust brands’ marketing communications to be accurate or truthful.
It seems the stench of the Cambridge Analytica scandal has lingered, giving customers ample cause to feel suspicious about handing over their data.
For this reason, the value exchange has to be made clearer. Websites now have to ask for consent for cookies and the like, but more often than not these notices are confusing for everyday users.
Following GDPR’s implementation in 2018, just 39% of cookie notices mentioned the specific purpose of their data collection, and only 21% stated who or what can access that data.
Given that most brands profess to be open and transparent in their dealings with customers, it makes sense to tell people whether you’re profiling them, and if so, why.
Tweaking the user experience, pinning the notification as a header or footer, rolling out a box notification – that’s all well and good, but it won’t work if the people using it don’t understand the value exchange.
Setting strict practices and starting a transparency board are two simple ways to ensure this gets done. The onus should never be on the user to work out why you’re collecting their data – tell them plainly, and let them decide if you’re worth it.
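To make that concrete, here is a minimal sketch of what a transparent consent notice could declare up front – the two details the research above found most notices omit: the specific purpose of collection and who can access the data. The function and field names here are hypothetical illustrations, not any real consent library’s API.

```python
# Hypothetical sketch: a consent notice that states the value exchange
# plainly, rather than leaving the user to work it out. Field names are
# illustrative assumptions, not a real library's schema.

def build_consent_notice(purposes, data_recipients, profiling=False):
    """Return a plain record describing the value exchange to the user."""
    return {
        "purposes": purposes,                # why the data is collected
        "data_recipients": data_recipients,  # who or what can access it
        "profiling": profiling,              # tell people if you profile them
        "user_choice_required": True,        # the user decides if it's worth it
    }

notice = build_consent_notice(
    purposes=["ad-funded content", "site analytics"],
    data_recipients=["site owner", "named ad partners"],
    profiling=True,
)
```

The point of the sketch is simply that purpose, access and profiling are explicit fields rather than buried in legal copy – the kind of declaration a transparency board could audit.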
Data ethics is more than simply informing people of why and how you’re using their data. Data systems should be designed with empathy, always putting the customer first – this means everyone, regardless of race, gender, class and ability. And it’s important we all show humility in respect of our great diversity as humans.
It’s something that’s close to my heart. I’ll never truly know what it’s like to have visual impairments like members of my close family do. What I do know is that their experience of the world is different to mine.
There shouldn’t be excuses for failing to design with empathy – many of us will know somebody who has some form of disability, and many more of us will develop one in later life.
This demographic accounts for 14m of the UK’s population and £274bn of its economy. Their conditions range from mild dyslexia to visual impairments and beyond, and we shouldn’t act as if our own experiences are the only, correct ones. That anyone is shut out of digital life by lazy design is unacceptable.
Those of us working in tech and handling data have a responsibility to listen to their needs with humility and grace, and to act on what we hear by actively co-creating inclusive brand experiences.
Actually getting real people on board – from those with different abilities through to a plurality of ethnicities, genders, sexualities and economic backgrounds – is the key to designing with empathy. If you don’t include them in the ideation and testing processes, your digital experience risks alienating them.
There is good reason why, for all its brilliance, data fails some people. How it’s collected makes it partial from inception – and that’s before you apply the subsequent biases introduced at the programming and design stages of experiences.
We should always consider where the data comes from: data only tells you what has happened, not what’s about to happen, could happen or will happen.
Data is, of course, integral to today’s businesses. It makes decisions easier, diversifies work and enables people to focus on creativity rather than functions. But you shouldn’t use data and AI to solve every business issue – they can hinder the intended purpose.
Take the recent backlash against Twitter’s cropping algorithm, which prioritized pictures of white faces over black ones. No doubt Twitter’s intentions were well meaning – to provide a slicker user experience – but it fell short on due diligence.
It’s why co-creation is so important. Data is often at odds with the qualitative information we can collect, but with the right people and processes in place, gathering that information shouldn’t scare users. Talking to people can go a long way towards reducing – or even eradicating – data biases.
All of this matters in an era of brand purpose. When you have brands like Ben & Jerry’s doing the right thing and standing up for social causes, the assumption is those brands will uphold those ethics through the entire customer journey – and that includes digital. Brand purpose is ultimately meaningless if our products alienate people, which in turn lets down the brand.
Clearly then, the need is there. So too is the wherewithal for marketing to legitimately be a standard bearer for digital ethics – we just need to want to.
The post Marketing can legitimately be the standard bearer for digital ethics – here’s how appeared first on ClickZ.