AI is changing who we hire and how we think about work


AI is transforming the workforce by redefining how talent is sourced, matched, and evaluated, with freelancers leading the adoption of new tools and practices. As enterprises confront bias, privacy, and the limitations of automation, a European model of responsible AI deployment is emerging.

AI is not simply a tool to automate processes or accelerate decisions. It is becoming a catalyst for more profound structural change in how organisations find, use, and think about talent. The rise of freelancing, already a pronounced shift in workforce dynamics, is now intersecting with AI-driven platforms that can match skills with unprecedented speed and accuracy. But the most important change is not about speed. It is about mindset.

Freelancers, by definition, sit outside the traditional corporate hierarchy. That puts them at the edge of the labour market, where technological change often bites first and hardest. What is striking is not just their rapid uptake of new tools but how their behaviour signals a broader cultural shift. Many spend half a day each week on upskilling. That statistic alone reflects an agency and intensity of learning that conventional employee structures often suppress. These are individuals who choose to remain technical doers rather than climbing the management ladder precisely because they want to stay at the coalface of innovation.

“The main hypothesis is that these are people who are passionate about their domain,” Claire Lebarz, Chief Technology and Chief Data & AI Officer at Malt, says. “They want to keep doing what they love to do and enjoy expert and doer roles more so than management.”

This dynamic, she argues, offers a real-time view of how demand is changing in the AI-driven enterprise. The fastest-growing freelance roles are not in niche experimental technologies but in full-stack AI, adaptable engineering, and cross-domain capabilities. The democratisation of AI tools is reflected in the 40 per cent growth in demand for low-code and no-code expertise. As Lebarz puts it, “It is a sign of the broader shift towards making AI accessible across the organisation.”

Bias is built in at the beginning

AI recruitment tools, particularly those powered by large language models, are increasingly being used to assess technical competencies. But that alone does not capture what really changes when algorithms replace job boards. What matters most is what happens before the first interview.

Recruitment processes are often shaped by shorthand proxies: preferred schools, familiar employers, and CV keywords. These filters are efficient, but they are riddled with bias. AI can remove some of it by reorienting selection around skills and capabilities rather than prestige or SEO. The danger, however, is that AI simply amplifies existing blind spots if the source data is flawed or too narrow in scope.

“The biggest challenge is not in parsing resumes; it is in how clients describe what they need,” Lebarz explains. “That is where we see the most bias. We have built systems that send gendered, gender-neutral, and feminine versions of job prompts to our models, so we can detect and mitigate those biases in the way demand is framed.”

This ability to interrogate language as data is increasingly crucial. In languages such as French, German, and Spanish, gender is structurally embedded in grammar, so the recruitment prompts themselves can carry unintentional bias. What matters, Lebarz says, is not simply building good algorithms but building robust evaluation frameworks before deploying anything at scale. That is what distinguishes AI engineering from product gimmickry.
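The evaluation idea Lebarz describes, sending differently gendered versions of the same job prompt to a model and comparing what comes back, can be sketched in a few lines. This is a hypothetical illustration, not Malt's actual system: `match_candidates`, the variant names, and the overlap metric are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of prompt-variant bias probing. Assumes some external
# matching function returns a ranked list of candidate IDs for a prompt;
# here we only define the probing and scoring logic around it.

def gender_variants(prompt_template: str) -> dict[str, str]:
    """Fill one job-prompt template with masculine, neutral, and feminine wordings.

    The French role nouns are illustrative; in production each language
    would need its own grammatically correct variants.
    """
    return {
        "masculine": prompt_template.format(role="développeur"),
        "neutral": prompt_template.format(role="développeur·se"),
        "feminine": prompt_template.format(role="développeuse"),
    }

def overlap_at_k(a: list, b: list, k: int = 10) -> float:
    """Fraction of candidates shared between the top-k of two rankings."""
    return len(set(a[:k]) & set(b[:k])) / k

def bias_score(results: dict[str, list]) -> float:
    """Score in [0, 1]: high when variants of the same prompt retrieve
    different candidates, i.e. when wording rather than skills drives matching."""
    pairs = [("masculine", "feminine"), ("masculine", "neutral"), ("feminine", "neutral")]
    return 1.0 - min(overlap_at_k(results[a], results[b]) for a, b in pairs)
```

In use, each variant would be sent to the matching model, and a high `bias_score` on the returned rankings would flag the prompt framing for mitigation before it reaches production.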

AI gets you to the conversation faster

One of the most persistent misconceptions about AI in recruitment is the notion that it is intended to replace human interaction. Its most valuable role is in redefining who is even considered worth talking to. “We do not believe AI should replace people talking to people,” Lebarz adds. “Its role is in shaping the consideration set, who gets that first conversation. AI should improve accuracy, not replace human judgment.”

That distinction explains the hybrid approach Malt takes to talent matching. For hard skills and technical requirements, algorithmic parsing works. For soft skills and cultural fit, it does not, at least not yet. Instead of pretending otherwise, the system is built to keep humans in the loop where ambiguity or nuance matters most.
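The hybrid split described above, algorithms for hard skills and humans for nuance, amounts to a routing decision. A minimal sketch, assuming hypothetical keyword lists and a three-way default (none of these names come from Malt):

```python
# Illustrative routing sketch: parseable hard-skill requirements are scored
# automatically; anything touching soft skills or cultural fit, and anything
# ambiguous, is sent to a human reviewer instead.

HARD_SKILLS = {"python", "kubernetes", "terraform", "react"}
SOFT_SIGNALS = {"culture", "team fit", "communication", "leadership"}

def route_requirement(requirement: str) -> str:
    """Return 'auto' for hard skills the matcher can parse, 'human' otherwise."""
    text = requirement.lower()
    if any(signal in text for signal in SOFT_SIGNALS):
        return "human"          # soft skills: keep a person in the loop
    if any(skill in text for skill in HARD_SKILLS):
        return "auto"           # hard skills: algorithmic parsing works
    return "human"              # ambiguous: default to human judgment
```

The design choice worth noting is the final default: when the system cannot classify a requirement, it escalates to a person rather than guessing, which mirrors the "humans where ambiguity matters most" principle.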

There are also pragmatic concerns. AI-generated content, such as auto-written CVs, can backfire, especially if hiring platforms penalise content that appears artificial. Lebarz is clear about where platforms should take responsibility. “We do not offer AI-generated CVs,” she continues. “What we want to do is remove the pressure on freelancers to optimise their profiles with keywords. The goal is to let them describe what they have done, not how shiny it looks on search.”

A European approach to control and privacy

Much of the current narrative around AI talent platforms is dominated by global players using monolithic LLMs and US-based cloud providers. Lebarz is deliberately building a different model. Malt’s technical architecture is based on smaller, task-specific language models trained on ten years of match data across hundreds of thousands of projects. That decision is as much ethical as it is functional.

“These models have a much smaller carbon footprint and are more efficient for the specific tasks we need,” Lebarz explains. “We can also control and audit their behaviour more easily.” Crucially, the models are hosted in Europe, with full GDPR compliance and strict limits on data sharing. Freelancers maintain complete control over the information they provide. The platform does not scrape external sources or infer data from social media. This adherence to data minimisation and user control aligns with broader concerns about digital sovereignty and AI regulation across the continent.

What emerges is a model of AI deployment rooted in transparency, modularity, and user choice. Clients can opt out of using AI-based matching tools if they prefer. That opt-out is not simply a gesture. It reflects a deeper design philosophy around user agency. “Giving users the choice to engage with AI or not is one of our core principles,” Lebarz says.

Skills, sovereignty and the speed of change

If AI is indeed reshaping the future of work, then the skills landscape it creates will not just reward narrow technical ability. It will foster adaptability, cross-domain thinking, and an increased sensitivity to ethical and political contexts. This year’s data trends at Malt reflect all three.

“AI demand is still growing fastest, 230 per cent year on year, but the interesting shift is in the nature of the roles,” Lebarz explains. “Python has surged as a top skill. There is a growing demand for European cloud providers. Open-source alternatives are accelerating. And cybersecurity roles that intersect with AI are the fastest growing in that space.”

There is also a conspicuous absence. Despite the rising energy demands of AI infrastructure, there is little evidence in Malt’s data of a corresponding demand for green tech or environmental AI skills. This may be a temporary oversight, but it also reveals how far AI discourse remains decoupled from broader ESG concerns, at least in terms of practical hiring needs.

Who shapes the future of work?

Lebarz is no stranger to the broader strategic conversation around AI. Her background spans technical leadership at Airbnb and policy work at the intersection of product development and ethics. When she attended France’s AI summit and briefly met President Emmanuel Macron, the tone of the discussion was not risk or regulation. It was urgency and investment.

“There is a clear desire to build a third path for AI, distinct from the US or China, based on European values of control, safety, and inclusion. But the safety conversation was largely absent. That was a missed opportunity,” she observes.

The same tension applies to corporate deployment. Leaders are tempted to let AI systems define not just the ‘who’ of hiring but also the ‘how’ and ‘why’. This is where values matter most. “Whether we let AI shape organisational strategy is not a technical inevitability,” Lebarz concludes. “It is a choice. That is why I stay close to product, so I can help shape what we build and how we build it.”

The future of AI in talent is not set in stone. It is a moving negotiation between precision and empathy, automation and agency, speed and trust. The questions it raises go beyond matching CVs to job specs. They ask us to redefine the nature of expertise, the ethics of design, and the governance of intelligence. That is a conversation worth having and one that must be led by humans.
