The generational data divide is reshaping trust in AI

Data sharing attitudes are being transformed by generational shifts, cultural expectations, and evolving notions of trust. As AI personalisation moves beyond compliance and into prediction, enterprises must balance transparency, ethics, and innovation to succeed.

For years, the question of data was simple: could it be kept safe? The old debates centred on locks and keys, on encryption, compliance, and regulatory sign-off. But that narrow view has slipped behind the times. A new split is emerging, and it runs between generations.

Younger people, immersed in digital services since childhood, rarely pause before handing over details. To them, data is currency. It buys relevance, convenience, and a smoother journey through the online world. For older groups, the instinct is different. They carry the memory of security breaches and clumsy data grabs, and the hesitation lingers. They ask what the return is, and they are not wrong to do so.

This tension matters far beyond consumer apps. Banks running predictive models, healthcare systems trialling AI diagnostics, and retailers building recommendation engines all depend on access to information. If one half of the market embraces the exchange while the other digs in its heels, the path to scaled AI becomes far from straightforward.

Generations are divided on sharing

The figures tell the story. Roughly one in five people aged 18 to 24 are happy to give their date of birth to a company they do not know. Among those in their late forties and early fifties, the figure is fewer than one in ten. Lee Edwards, Vice President for EMEA at Amplitude, which develops digital analytics platforms to help organisations understand customer behaviour, believes the contrast reflects a deeper cultural divide that executives cannot ignore.

“We are definitely seeing a significant generational gap when it comes to data sharing attitudes,” Edwards says. “Younger generations are much more open when it comes to sharing. Rather than treating data sharing primarily as a privacy risk, many younger consumers see it as a natural exchange that enhances their experiences. They have grown up in an environment where personalised recommendations are expected parts of everyday digital interactions.”

The mood among older users is less forgiving. “They have witnessed firsthand how privacy concerns have evolved over decades, so they tend to need stronger reassurances before sharing personal details,” Edwards continues. “The challenge for enterprises is clear enough: how to build systems that reward the enthusiasm of the young without alienating those who prefer to keep a tighter grip on their personal details.”

For leaders, the real question is whether to tilt strategies toward the cautious or the carefree. Do you build to satisfy sceptics, or do you double down on the expectations of digital natives who want frictionless personalisation? The answer is awkward: both. Markets are too fragmented to allow for a single line of attack.

Trust as the universal currency

It would be easy to view this solely as a generational clash, but that overlooks the broader context. Trust remains a constant across all age groups. Whether people are 25 or 65, they will only share data if they believe the recipient will handle it with care and purpose.

“An overwhelming 94 per cent of consumers across every age bracket point to trust as the foundation of their decisions. About 64 per cent would readily share personal details with an organisation they trust, while only 36 per cent would do the same with a company they are unfamiliar with,” Edwards explains.

This trust is not a static thing. It rises when a recommendation saves time or an interaction feels seamless. It falls when organisations ask for irrelevant details or push too hard. It is no longer enough for IT departments to cite compliance rules. Trust is a lived experience, measured by outcomes people notice.

Across EMEA, younger users see data sharing as part of a healthier digital economy. Older groups think in terms of risks avoided. Both instincts are valid. Enterprises that ignore these nuances will find themselves adrift, unable to build the depth of relationship needed for ambitious AI projects.

Good use cases matter most

Consumers are pragmatic. They do not read privacy notices for entertainment. What they want are results that make sense in their daily lives. “Trust has evolved far beyond basic compliance. Consumers are increasingly demanding transparency around effective use cases for their data. They want organisations to explain not just how their information will be protected, but the tangible benefits they will receive in return,” Edwards says.

Retail offers an obvious lesson. An app that uses shopping history to flag discounts or suggest useful extras quickly proves its worth. Customers see the trade. But when a news site insists on access to contacts or location without any explanation, the exchange feels one-sided. “Collecting excessive data without a clear purpose is precisely what undermines trust. A simple news website that requests access to location, contacts, and other personal information without explanation feels intrusive and creates suspicion,” Edwards continues.

For executives weighing up AI investments, the message is blunt. Do not gather more than you need. Be clear about why you need it. Show benefits quickly. Value must be visible, not buried in promises.
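To make that concrete, here is a minimal sketch, in TypeScript, of what purpose-bound collection could look like: the type forces every requested field to carry a user-facing reason. The field names and reasons are illustrative, not drawn from any real product.

```typescript
// A minimal sketch of purpose-bound data collection. Field names and
// reasons are illustrative; a real form would also localise this text.

interface RequestedField {
  name: string;
  reason: string; // shown to the user next to the input
}

// The type makes a reason mandatory for every field, and the builder
// drops any field whose reason is blank, so "why do you need this?"
// is answerable by construction.
function buildForm(fields: RequestedField[]): RequestedField[] {
  return fields.filter((f) => f.reason.trim().length > 0);
}

const signup = buildForm([
  { name: "email", reason: "Used to send your order confirmations." },
  { name: "dateOfBirth", reason: "Required by law for age-restricted items." },
]);

console.log(signup.map((f) => `${f.name}: ${f.reason}`).join("\n"));
```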

Personalisation and its limits

AI has pushed personalisation to new territory. What began as segmentation now edges towards prediction. In the best cases, it feels like relevance: time saved, fewer pointless offers. In the worst, it feels unsettling, as if the system knows too much. “There absolutely is such a thing as being too personalised, when it makes people feel exposed rather than understood. The line is about respecting user expectations and context,” Edwards explains.

One answer is to take it step by step. Progressive personalisation lets organisations start simple and add complexity only as users grow more comfortable. This gradualist approach acknowledges that comfort levels vary not only by generation but also by culture and geography.
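As an illustration of how that staging might work, the sketch below assumes a simple three-tier model and treats session count as a crude proxy for familiarity. The tiers, thresholds, and names are all hypothetical.

```typescript
// A minimal sketch of progressive personalisation, assuming a simple
// three-tier model. All names and thresholds here are illustrative.

type Tier = 0 | 1 | 2; // 0 = generic, 1 = contextual, 2 = predictive

interface UserProfile {
  consentedTier: Tier;       // highest tier the user has opted into
  sessionsCompleted: number; // crude proxy for familiarity with the service
}

// The tier actually applied: never above what the user consented to,
// and never above what their engagement so far suggests they expect.
function activeTier(user: UserProfile): Tier {
  const earned: Tier = user.sessionsCompleted >= 20 ? 2
    : user.sessionsCompleted >= 5 ? 1
    : 0;
  return Math.min(user.consentedTier, earned) as Tier;
}

// Invite users to opt into a deeper tier only once the current one has
// had a chance to prove its value.
function shouldOfferUpgrade(user: UserProfile): boolean {
  return user.consentedTier < 2 && activeTier(user) === user.consentedTier;
}
```

The design choice worth noting is that the effective tier is the minimum of what the user has consented to and what their engagement suggests they expect, so neither the enthusiastic nor the cautious are overridden.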

Algorithms can spot patterns with astonishing accuracy. But accuracy is not the same as judgment. “There is absolutely a risk that purely algorithmic approaches might miss important cultural and generational nuances,” Edwards continues. “Data can tell us what is happening, but often struggles to explain why. The most effective approach combines the pattern recognition capabilities of algorithms with human insight that contextualises findings and identifies flaws.”
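One way such a pairing is often operationalised, sketched here under assumed names, is to route low-confidence or high-stakes outputs to a human reviewer. The model-reported confidence score and the 0.9 threshold are assumptions, not features of any particular system.

```typescript
// A minimal sketch of routing algorithmic output to human review.
// The confidence field and threshold are illustrative assumptions.

interface Prediction<T> {
  value: T;
  confidence: number; // 0..1, as reported by the model
}

type Routed<T> =
  | { kind: "automatic"; value: T }
  | { kind: "needs-human-review"; value: T };

// Low-confidence or high-stakes results go to a person who can supply
// the context the data alone cannot.
function route<T>(p: Prediction<T>, highStakes: boolean): Routed<T> {
  if (highStakes || p.confidence < 0.9) {
    return { kind: "needs-human-review", value: p.value };
  }
  return { kind: "automatic", value: p.value };
}

const decision = route({ value: "offer-loan", confidence: 0.72 }, true);
// -> { kind: "needs-human-review", value: "offer-loan" }
```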

That is particularly important in sensitive fields. A bank that pushes too hard with predictive models risks spooking customers. A hospital using AI for diagnostics must tread carefully to avoid eroding trust between patient and clinician. Personalisation must serve the individual, not expose them.

Balancing innovation and caution

The divide is not just outside the organisation. It plays out inside boardrooms and project teams. Younger staff want bold experiments. Older colleagues stress restraint. “Smart leaders recognise that both perspectives have value,” Edwards says. “The key is building clear ethical frameworks and data governance models that enable innovation while establishing guardrails for responsible use. This is not just about technical safeguards but creating a culture where data ethics is part of every stage of product development.”

Handled well, that tension sharpens decision-making. It forces innovators to answer tough questions. It ensures caution is not paralysing but constructive. Ignored, it becomes a drag on progress, with projects either recklessly rushed or endlessly stalled.

Even the most sophisticated AI system is useless if staff do not understand how to work with it. Data literacy is not optional. It is the foundation of adoption. “Building data literacy requires both education and cultural change,” Edwards adds. “Organisations need to make complex concepts accessible regardless of technical background, moving away from jargon and focusing on practical use cases that demonstrate value.”

Literacy here is not about knowing every algorithm. It is about confidence: knowing when to trust a result, when to question it, and when to escalate concerns. It is also about ethics. Employees must be able to spot when an output might be biased or unfair.

Culture matters as much as skill. Organisations need to normalise curiosity. Staff should be encouraged, not discouraged, to ask why a system produced a particular outcome. When questions are welcomed, blind spots shrink.

Preparing for the next generation

The next wave of consumers will expect AI personalisation as the default. They will also demand greater transparency and control. Reconciling those expectations will test even the most advanced organisations. “The future will belong to organisations that combine sophisticated data capabilities with genuine respect for consumer preferences,” Edwards concludes. “Companies should invest in systems that allow for granular consent and preference management while developing clear communication around data practices.”
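As a sketch of what granular, per-purpose consent could look like at the data level, the snippet below keeps one record per purpose and treats a missing record as refusal. The purpose names are hypothetical, and a production system would also track consent versions for auditability.

```typescript
// A minimal sketch of per-purpose consent. Purpose names are hypothetical.

type Purpose = "recommendations" | "marketing" | "analytics";

interface ConsentRecord {
  granted: boolean;
  updatedAt: Date;
}

type ConsentLedger = Map<Purpose, ConsentRecord>;

// Every data use checks consent for its own purpose; nothing is inferred
// from an unrelated grant, and the absence of a record means "no".
function mayUseFor(ledger: ConsentLedger, purpose: Purpose): boolean {
  return ledger.get(purpose)?.granted ?? false;
}

// Changes overwrite the previous state with a timestamp, so withdrawal
// takes effect immediately.
function setConsent(ledger: ConsentLedger, purpose: Purpose, granted: boolean): void {
  ledger.set(purpose, { granted, updatedAt: new Date() });
}

// Example: marketing consent is granted, then withdrawn.
const ledger: ConsentLedger = new Map();
setConsent(ledger, "marketing", true);
setConsent(ledger, "marketing", false);
console.log(mayUseFor(ledger, "marketing")); // false
console.log(mayUseFor(ledger, "analytics")); // false (never asked)
```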

For leaders, the warning is simple. The competitive advantage will not come from amassing the most extensive datasets or deploying the most complex models. It will come from building systems flexible enough to adapt to shifting attitudes, treating ethics as a strategic asset rather than a compliance chore.

The generational divide in data attitudes is a real phenomenon. But it does not have to be permanent. Trust, once earned, can cross generations. It is the one currency that underpins every successful deployment of AI.
