Generative AI is driving innovation by bridging research gaps and accelerating new discoveries across sectors. Mark Venables spoke with Anthony Schiavo, Senior Director at Lux Research, about the company’s latest eBook, which explores how AI reshapes R&D, helping industries overcome traditional limitations and unlock unprecedented opportunities.
Generative AI, especially through large language models (LLMs), is advancing innovation by bridging research gaps, enhancing efficiency, and creating new knowledge. Lux Research’s eBook, You, Me, GPT: Unpacking Generative AI’s Impact on Innovation, explores AI’s transformative role in research and innovation today. As AI gains traction across sectors, its applications extend well beyond simple automation.
“AI offers a path to discoveries that might otherwise depend on sheer luck,” says Anthony Schiavo, Senior Director at Lux Research. “In fields where vast combinations of data need to be tested, AI is invaluable. It can systematically sift through millions of possibilities to identify insights and patterns that humans might miss.” This structured approach to handling vast datasets redefines innovation, allowing industries to explore uncharted territories that traditional methods couldn’t access.
Generating insights from complex data
AI’s ability to process and interpret large datasets is particularly valuable in research-heavy fields where conventional data organisation techniques fall short. “Generative AI excels at the semantic analysis needed to truly understand what patents describe, the potential technologies involved, and where new innovation opportunities exist,” Schiavo explains. “By applying these tools, companies can streamline information retrieval, transforming how they explore patent and academic literature. This advantage is especially evident in fields like pharmaceuticals or materials science, where unlocking new insights can directly lead to breakthrough products.”
With tools like Microsoft’s Copilot, R&D teams are now able to semantically search data repositories, identify past projects, and prevent redundant work, optimising resources without stalling ongoing efforts. Although these applications primarily boost efficiency, Schiavo notes that the long-term potential for AI to transform R&D is immense. “LLMs, for instance, provide a foundation for efficiency,” he says. “But they also pave the way for automation to drive more substantial innovations. By allowing experts to build on existing work and focus on high-level, creative tasks, AI is slowly reshaping the way we approach research, making it both faster and more precise.” In materials science, for example, AI analysis accelerates discoveries, reducing manual labour where datasets and complex models intersect.
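To make the idea concrete, the sketch below shows one way a team might run such a semantic search over an internal project archive. It assumes the open-source sentence-transformers library; the model name, project summaries, and query are illustrative, not a description of Copilot’s internals.

```python
# Minimal sketch of semantic search over an internal project archive.
# Assumes the sentence-transformers library; all data here is invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Hypothetical one-line summaries of past R&D projects.
past_projects = [
    "Solvent-free electrode coating trials for lithium-ion cells",
    "High-temperature polymer blends for under-bonnet automotive parts",
    "Enzyme screening for biodegradable packaging films",
]

# A new project proposal: has anything similar been done before?
query = "binder-free coating process for battery electrodes"

# Embed the archive and the query, then rank past projects by cosine similarity.
corpus_emb = model.encode(past_projects, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

for idx in scores.argsort(descending=True):
    i = int(idx)
    print(f"{float(scores[i]):.2f}  {past_projects[i]}")
```

Because the match is on meaning rather than keywords, a proposal phrased very differently from the original project report can still surface it.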
Breaking new ground in R&D
The influence of generative AI is visible in industries that depend heavily on data-driven discovery, such as materials science and pharmaceuticals, where complex data needs to be mined for actionable insights. Schiavo draws parallels between AI and Napier’s bones, a 17th-century tool that simplified complex mathematical operations and fuelled the age of exploration. “AI is like a new set of Napier’s bones, unlocking new opportunities just as that tool did centuries ago,” he says. “There are many areas where we know better solutions exist, but finding them through traditional means is extremely time- or resource-intensive. High-entropy alloys, which combine multiple elements for high-temperature resistance, are one example. With AI, we can now predict their properties by bypassing traditional simulations and organising existing research data to highlight promising, unexplored combinations.” This efficiency allows industries to reduce reliance on trial and error, approaching data-rich fields with more precision.
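The screening approach Schiavo alludes to can be pictured as a surrogate model: a regressor trained on known composition–property data ranks unexplored candidate alloys, so that only the most promising go on to simulation or synthesis. The sketch below, assuming scikit-learn and entirely invented numbers, is a toy illustration of that pattern rather than any specific workflow.

```python
# Toy surrogate-model screen for alloy compositions (illustrative data only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Invented training set: element fractions (Fe, Ni, Cr, Co, Al) against a
# measured high-temperature property, standing in for published results.
X_known = np.array([
    [0.25, 0.25, 0.20, 0.20, 0.10],
    [0.20, 0.20, 0.20, 0.20, 0.20],
    [0.30, 0.10, 0.25, 0.25, 0.10],
    [0.15, 0.30, 0.20, 0.25, 0.10],
])
y_known = np.array([610.0, 655.0, 590.0, 640.0])  # e.g. yield strength in MPa

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Enumerate unexplored candidate compositions and rank them by prediction,
# rather than running a costly simulation or experiment for each one.
candidates = np.random.default_rng(0).dirichlet(np.ones(5), size=1000)
predicted = model.predict(candidates)
top_three = candidates[np.argsort(predicted)[::-1][:3]]
print("Most promising candidate compositions:\n", np.round(top_three, 3))
```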
For some organisations, this structured AI-driven approach has already yielded transformative results. Cabot Corporation, for instance, initially met AI’s unconventional suggestions with scepticism. Yet the insights quickly proved valuable, pushing battery research into previously unexplored areas. “Cabot’s team was initially hesitant because AI can often feel like a black box, but once they tested its suggestions, they found entirely new directions in battery materials,” Schiavo elaborates. “While they didn’t understand every aspect of the model’s process, the results spoke for themselves. AI enabled them to explore combinations they might not have otherwise considered, expanding their understanding of battery technology rapidly.”
Trust and accountability in AI outputs
Despite its promise, the probabilistic nature of generative AI brings trust challenges, especially in high-stakes industries where accuracy is paramount. “A key issue is responsibility: AI cannot be held accountable for decisions in the same way that a human can,” Schiavo continues. “Organisations using AI need to implement checks and balances to ensure that final decisions are made with a human in the loop. This is especially important in sectors like healthcare and manufacturing, where a simple error could have significant consequences.”
He further stresses the importance of approaches like retrieval-augmented generation (RAG), which integrates verified human-generated content to ground AI outputs and reduce hallucinations or inaccurate responses. However, even RAG models face challenges if they rely on contaminated or low-quality data, particularly in fields with a high volume of AI-generated content. “If AI-generated patents or research flood the system, finding reliable insights becomes difficult,” Schiavo warns. “As the body of AI-generated content grows, we risk embedding inaccuracies and losing valuable knowledge in a sea of noise. Human oversight is therefore essential to verify AI outputs and maintain quality.” This blend of human intervention with AI-driven processes highlights the importance of responsible AI deployment, particularly in industries that require accountability and reliability.
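In outline, a RAG pipeline retrieves vetted, human-written passages and places them in the model’s prompt before it answers, so the response stays anchored to verifiable sources. The minimal sketch below, again assuming the sentence-transformers library and an invented corpus, shows only the retrieval and grounding step; the final LLM call is left abstract.

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the answer in
# human-verified passages. Corpus, question, and prompt wording are invented.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Curated, human-verified snippets (e.g. internal reports, reviewed papers).
verified_corpus = [
    "Silicon anodes swell by roughly 300% on lithiation, which causes cracking.",
    "Thin carbon coatings improve the cycle life of silicon-based anodes.",
]
corpus_emb = embedder.encode(verified_corpus, convert_to_tensor=True)

def build_grounded_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant verified passages and embed them in the prompt."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    ranked = util.cos_sim(q_emb, corpus_emb)[0].argsort(descending=True)[:top_k]
    context = "\n".join(verified_corpus[int(i)] for i in ranked)
    return (
        "Answer using ONLY the context below; if it is not covered, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("Why do silicon-based anodes degrade quickly?")
print(prompt)  # this grounded prompt would then be sent to whichever LLM the team uses
```

Because the model is asked to answer only from the retrieved context, its output can be checked against the cited passages, which is exactly the kind of human-in-the-loop verification Schiavo describes.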
The ethical implications of AI on labour
As AI continues to reshape innovation, it also impacts labour, raising ethical concerns regarding the working conditions of those who support AI development. Schiavo points out that many AI models rely on human labour for tasks such as data labelling and translation, often performed by low-paid workers in the Global South. “There’s a significant ethical issue in the labour that supports AI,” he says. “Many of these models are built on datasets labelled by workers who receive minimal compensation for what are often very tedious tasks. People might not realise that AI tools they use every day have origins in labour practices that they wouldn’t find acceptable for themselves.”
Highlighting the ethical importance of transparency, Schiavo adds that greater awareness about AI’s development process could foster a more responsible approach within the industry. The ethical landscape extends beyond development, affecting sectors where AI is deployed, such as healthcare and legal services. These fields require high levels of transparency, yet AI’s opaque models can complicate accountability. Schiavo recommends that organisations adopt strong ethical guidelines to govern AI deployment, particularly where the social implications are significant. “If AI is used in fields like law or medicine, we need clear guidelines to ensure that human rights and ethics remain at the forefront. It’s critical for organisations to implement ethical checks, ensuring that AI is not only effective but also responsibly deployed.”
AI’s influence on workforce dynamics
Generative AI’s growing presence also presents challenges and opportunities for the workforce. Schiavo compares AI’s potential impact on labour to the gig economy, where technological advances have transformed stable jobs into casualised roles. “In some cases, AI can degrade working conditions, particularly if labour protections are weak,” he explains. “The gig economy is a cautionary tale, where stable work has been replaced by short-term contracts with lower wages and fewer protections. Without a strong regulatory framework, AI could have a similar impact on knowledge work, driving down wages and reducing job quality.” The potential for AI to fragment and casualise knowledge work presents a risk for industries, especially if protections are not in place to ensure fair labour standards.
However, Schiavo remains optimistic about AI’s potential to democratise advanced research roles. By lowering skill barriers, AI-driven tools allow more people to participate in high-level tasks within R&D, widening access to scientific research and creating new career paths. “AI has the potential to transform R&D by making it more accessible,” he says. “Tools that lower the entry barriers allow a broader group to engage in scientific research, which is crucial for industries like manufacturing. But without protective policies, this democratisation might be undercut by the same issues we’ve seen in the gig economy.” This democratisation could reshape the workforce if companies and policymakers implement frameworks that support fair labour practices.
Economic viability and the evolving AI landscape
Large-scale AI models require substantial resources, both financial and physical, which has raised questions about the sustainability of such technologies. Schiavo highlights the high costs of training these models, including energy and computing power, as a persistent challenge. “The economic landscape of AI is uncertain,” he explains. “The resources needed to sustain these models are considerable, and while the benefits of sophisticated AI are clear, the costs make it challenging for many organisations. With open-source alternatives now available, companies need to think strategically about where to invest in proprietary tools and where open-source solutions might suffice.” This trend toward open-source tools offers cost-effective solutions that could shift the industry, providing new avenues for AI implementation.
As companies weigh these options, Schiavo suggests that hybrid strategies, integrating both proprietary and open-source AI, could provide a balanced approach to managing costs. “A hybrid approach can mitigate some of the financial burden of proprietary models while maintaining the advanced capabilities required in certain fields,” he adds. “As the AI landscape evolves, companies will increasingly focus on technologies that offer scalability and sustainability. These strategies reflect the growing importance of financially viable AI solutions, especially for industries with tight operational budgets.”
Strategic AI applications driving value
In the future, AI’s most valuable applications will emerge in fields that require both automation and new knowledge creation. Healthcare diagnostics exemplifies this dual benefit: AI tools support diagnostic accuracy while addressing labour shortages in skilled fields. “In high-labour, high-knowledge fields like healthcare, AI’s dual role in automating and generating new insights is immensely valuable,” Schiavo says. “Applications like medical diagnostics are already seeing benefits, as AI can help identify disease markers and provide insights that might not be immediately apparent. This combined impact makes AI an invaluable asset for industries with specialised skill requirements.”
Schiavo advises companies exploring AI to take a phased approach, moving from pilot projects to broader applications as they refine their strategies. Lux’s eBook details stages of AI adoption, beginning with insight generation and moving through ideation, investigation, and eventual implementation.




