Humanists pass global declaration on artificial intelligence and human values
Representatives of the global humanist community voted to pass the Luxembourg Declaration on Artificial Intelligence and Human Values at the 2025 general assembly of Humanists International, held in Luxembourg on Sunday 6 July.
Drafted by Humanists UK with input from leading AI experts and other member organisations of Humanists International, the declaration outlines a set of ten shared ethical principles for the development, deployment, and regulation of artificial intelligence (AI) systems. It calls for AI to be aligned with human rights, democratic oversight, and the intrinsic dignity of every person, and for urgent action from governments and international bodies to make sure that AI serves as a tool for human flourishing, not harm.
Humanists UK patrons Professor Kate Devlin and Dr Emma Byrne were among the experts who consulted on an early draft of the declaration, prior to amendments from member organisations. Professor Devlin is Humanists UK’s commissioner to the UK’s AI Faith & Civil Society Commission.
Defining the values of our AI future
Introducing the motion on the floor of the general assembly, Humanists UK Director of Communications and Development Liam Whitton urged humanists to recognise that the AI revolution was not a distant prospect on the horizon but already upon us. He argued that it fell to governments, international institutions, and ultimately civil society to define the values against which AI models should be trained, and the standards by which AI products and companies ought to be regulated.
Uniquely, humanists bring to the global conversation a principled secular ethics grounded in evidence, compassion, and human dignity. As governments and institutions grapple with the challenge of ‘AI alignment’ – ensuring that artificial intelligence reflects and respects human values – humanists offer a hopeful vision, rooted in a long tradition of thought about human happiness, moral progress, and the common good.
Read the Luxembourg Declaration on Artificial Intelligence and Human Values:
Adopted by the Humanists International General Assembly, Luxembourg, 2025.
In the face of artificial intelligence’s rapid advancement, we stand at a unique moment in human history. While new technologies offer unprecedented potential to enhance human flourishing, handled carelessly they also pose profound risks to human freedoms, human security, and our collective future.
AI systems already pervade innumerable aspects of human life and are developing far more rapidly than current ethical frameworks and governance structures can adapt. At the same time, the rapid concentration of these powerful capabilities within a small number of hands threatens to issue new challenges to civil liberties, democracies, and our vision of a more just and equal world.
In response to these historic challenges, the global humanist community affirms the following principles on the need to align artificial intelligence with human values rooted in reason, evidence, and our shared humanity:
- Human judgment: AI systems have the potential to empower and assist individuals and societies to achieve more in all aspects of human life. But they must never displace human judgment, human reason, human ethics, or human responsibility for our actions. Decisions that deeply affect people’s lives must always remain in human hands.
- Common good: Fundamentally, states must recognise that AI should be a tool to serve humanity rather than enrich a privileged few. The benefits of technological advancement should flow widely throughout society rather than concentrate power and wealth in ever-fewer hands.
- Democratic governance: New technologies must be democratically accountable at all levels – from local communities and small private enterprises through to large multinationals and countries. No corporation, nation, or special interest should wield unaccountable power through technologies with potential to affect every sphere of human activity. Lawmakers, regulators, and public bodies must develop and sustain the expertise to keep pace with AI’s evolution and respond to emerging challenges.
- Transparency and autonomy: Citizens cannot meaningfully participate in democracies if the decisions affecting their lives are opaque. Transparency must be embedded not only in laws and regulations, but in the design of AI systems themselves — designed responsibly, with clear intent and purpose, and full human accountability. Laws should guarantee that every individual can freely decide how their personal data is used, and grant all citizens the means to query, contest, and shape how technologies are deployed.
- Protection from harm: Protecting people from harm must be a foundational principle of all AI systems, not an afterthought. As AI risks amplifying existing injustices in society – including racism, sexism, homophobia, and ableism – states and developers must act to prevent its use in discrimination, manipulation, unjust surveillance, targeted violence, or the suppression of lawful speech. Governments and business leaders must commit to long-term AI safety research and monitoring, aligning future AI systems with human goals, desires, and needs.
- Shared prosperity: Previous industrial revolutions pursued progress without sufficient regard for human suffering. Today we must not. Technological advancement cannot be allowed to erode human dignity or entrench social divides. A truly human-centric approach demands bold investment in training, education, and social protections to enhance jobs, protect human dignity, and support those workers and communities most affected.
- Creators and artists: Properly harnessed, AI can help more people enjoy the benefits of creativity — expressing themselves, experimenting with new ideas, and collaborating in ways that bring personal meaning and joy. But we must continue to recognise and protect the unique value that human artists bring to creative work. Intellectual property frameworks must guarantee fair compensation, attribution, and protection for human artists and creators.
- Reason, truth, and integrity: Human freedom and progress depend on our ability to distinguish truth from falsehood and fact from fiction. As AI systems introduce new and far-reaching risks to the integrity of information, legal frameworks must rise to protect free inquiry, freedom of expression, and the health of democracy itself from the growing threat of misinformation, disinformation, and deliberate deception at scale.
- Future generations: The choices we make about AI today will shape the world for generations to come. Governments, civil society, and technology leaders must remain vigilant and act with foresight – prioritising the mitigation of environmental harms and long-term risks to human survival. These decisions must be guided by our responsibilities not only to one another, but to future generations, the ecosystem we rely on, and the wider animal kingdom.
- Human freedom, human flourishing: The ultimate value of AI will lie in its contribution to human happiness. To that end, we should embed shared values that promote human flourishing into AI systems — and be ambitious in using AI to maximise human freedom. For individuals, this could mean more time at leisure, pursuing passion projects, learning, reflecting, and making richer connections with other human beings. Collectively, we should realise these benefits by making advances in science and medicine, resolving pressing global challenges, and addressing inequalities within our societies.
We commit ourselves as humanist organisations and as individuals to advocating these same principles in the governance, ethics, and deployment of AI worldwide.
We affirm the importance of humanist values to navigating these new frontiers – only by prioritising reason, compassion, dignity, freedom, and our shared humanity can human societies adequately navigate these emerging challenges.
We call upon governments, corporations, civil society, and individuals to adopt these same principles through concrete policies, practices, and international agreements, taking this opportunity to renew our commitments to human rights, human dignity, and human flourishing now and always.
Previous Humanists International declarations – binding statements of organisational policy recognising outlooks, policies, and ethical convictions shared by humanist organisations in every continent – include the Auckland Declaration against the Politics of Division (2018), Reykjavik Declaration on the Climate Change Crisis (2019), and the Oxford Declaration on Freedom of Thought and Expression (2014). Traditionally, humanist organisations have marshalled these declarations as resources in their domestic and UN policy work, such as in Humanists UK’s advocacy of robust freedom of expression laws, or in formalising specific programmes of voluntary work, such as that of Humanist Climate Action in the UK.
Notes
For further comment or information, media should contact Humanists UK Director of Public Affairs and Policy Richy Thompson at press@humanists.uk or phone 0203 675 0959.
From 2022: The time has come: humanists must define the values that will underpin our AI future.
Humanists UK is the national charity working on behalf of non-religious people. Powered by over 150,000 members and supporters, we advance free thinking and promote humanism to create a tolerant society where rational thinking and kindness prevail. We provide ceremonies, pastoral care, education, and support services benefitting over a million people every year and our campaigns advance humanist thinking on ethical issues, human rights, and equal treatment for all.
Studying a galaxy far, far away could become easier with help from AI, says researcher
A recent Memorial University of Newfoundland graduate says his research could make studying galaxies more efficient – with help from artificial intelligence.
As part of Youssef Zaazou’s master’s of science, he developed an AI-based image-processing technique that generates predictions of what certain galaxies may look like in a given wavelength of light.
“Think of it as translating galaxy images across different wavelengths of light,” Zaazou told CBC News over email.
He did this by researching past methods for similar tasks, adapting current AI tools to his specific purposes, and finding and curating the right dataset to train the models – along with plenty of trial and error.
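To give a sense of what "translating galaxy images across wavelengths" involves, here is a minimal, illustrative sketch in PyTorch. It assumes pairs of co-registered galaxy cutouts in a source band and a target band; the tiny encoder-decoder (`TinyTranslator`) and the random stand-in data are hypothetical placeholders, not Zaazou's published architecture or dataset.

```python
# Minimal sketch of wavelength-to-wavelength image translation (illustrative only).
# Assumes paired, co-registered galaxy cutouts: input band -> target band.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Small encoder-decoder mapping a 1-channel image in one band to another band."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, source, target):
    """One optimization step on a batch of (source-band, target-band) cutouts."""
    optimizer.zero_grad()
    prediction = model(source)
    loss = nn.functional.l1_loss(prediction, target)  # pixel-wise reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyTranslator()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Stand-in data: random 64x64 cutouts in place of real survey images.
    source = torch.rand(8, 1, 64, 64)
    target = torch.rand(8, 1, 64, 64)
    print(train_step(model, optimizer, source, target))
```

Trained on real paired survey images, a model along these lines outputs a predicted image in the target wavelength for any new cutout, which is what lets astronomers flag regions worth follow-up observation.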
“Instead of … having to look at an entire region of sky, we can get predictions for certain regions and figure out, ‘Oh this might be interesting to look at,'” said Zaazou. “So we can then prioritize how we use our telescope resources.”
Zaazou recently teamed up with his supervisors Terrence Tricco and Alex Bihlo to co-author a paper on his research in The Astrophysical Journal, which is published by The American Astronomical Society.
Tricco says this research could also help justify the allocation of time on high-demand telescopes such as the Hubble Space Telescope, which assigns observing time through a competitive process.
A future for AI in astronomy
Both Tricco and Zaazou emphasised the research does not use AI to replace current methods but to augment them.
Tricco says Zaazou’s findings have the potential to help guide future telescope development and predict what astronomers might expect to see, making for more efficient exploration.
Calling The Astrophysical Journal the “gold standard” of astronomy journals, Tricco hopes the wider astronomical community will take notice of Zaazou’s findings.
“We want to have them be aware of this because as I was mentioning, AI, machine learning, and physics, astronomy, it’s still very new for physicists and for astronomers, and they’re a little bit hesitant about these tools,” said Tricco.
Tricco praised the growing presence of space research in general at Memorial University.
“We are here, we’re doing great research,” he said.
He added growing AI expertise is also transferable to other disciplines.
“I think that builds into our just tech ecosystem here as well.”
‘Only the beginning’
Though Zaazou’s time as a Memorial University student is over, he hopes to see research in this area continue to grow.
“I’m hoping this is the beginning of further research to be done,” he said.
Though Zaazou described his contribution to the field as merely a “pebble,” he’s happy to have been able to do his part.
“I’m an astronomer. And it just feels great to be able to say that and to be able to have that little contribution because I just love the field and I’m fascinated by everything out there,” said Zaazou.
‘You can make really good stuff – fast’: new AI tools a gamechanger for film-makers
A US stealth bomber flies across a darkening sky towards Iran. Meanwhile, in Tehran a solitary woman feeds stray cats amid rubble from recent Israeli airstrikes.
To the uninitiated viewer, this could be a cinematic retelling of a geopolitical crisis that unfolded barely weeks ago – hastily shot on location, somewhere in the Middle East.
However, despite its polished production look, it wasn’t shot anywhere, there is no location, and the woman feeding stray cats is no actor – she doesn’t exist.
The engrossing footage is the “rough cut” of a 12-minute short film about last month’s US attack on Iranian nuclear sites, made by the directors Samir Mallal and Bouha Kazmi. It is also made entirely by artificial intelligence.
The clip is based on a detail the film-makers read in news coverage of the US bombings – a woman who walked the empty streets of Tehran feeding stray cats. Armed with the information, they have been able to make a sequence that looks as if it could have been created by a Hollywood director.
The impressive speed and, for some, worrying ease with which films of this kind can be made has not been lost on broadcasting experts.
Last week Richard Osman, the TV producer and bestselling author, said that an era of entertainment industry history had ended and a new one had begun – all because Google had released a new AI video-making tool used by Mallal and others.
“So I saw this thing and I thought, ‘well, OK that’s the end of one part of entertainment history and the beginning of another’,” he said on The Rest is Entertainment podcast.
Osman added: “TikTok, ads, trailers – anything like that – I will say will be majority AI-assisted by 2027.”
For Mallal, an award-winning London-based documentary maker who has made adverts for Samsung and Coca-Cola, AI has provided a new format – “cinematic news”.
The Tehran film, called Midnight Drop, is a follow-up to Spiders in the Sky, a recreation of a Ukrainian drone attack on Russian bombers in June.
Within two weeks, Mallal, who directed Spiders in the Sky on his own, was able to make a film about the Ukraine attack that would have cost millions – and would have taken at least two years including development – to make pre-AI.
“Using AI, it should be possible to make things that we’ve never seen before,” he said. “We’ve never seen a cinematic news piece before turned around in two weeks. We’ve never seen a thriller based on the news made in two weeks.”
Spiders in the Sky was largely made with Veo3, an AI video generation model developed by Google, and other AI tools. The voiceover, script and music were not created by AI, although ChatGPT helped Mallal edit a lengthy interview with a drone operator that formed the film’s narrative spine.
Google’s film-making tool, Flow, is powered by Veo3. It also creates speech, sound effects and background noise. Since its release in May, the impact of the tool on YouTube – also owned by Google – and social media in general has been marked. As Marina Hyde, Osman’s podcast partner, said last week: “The proliferation is extraordinary.”
Quite a lot of it is “slop” – the term for AI-generated nonsense – although the Olympic diving dogs have a compelling quality.
Mallal and Kazmi aim to complete the film, which will intercut the Iranian woman’s story with the stealth bomber mission and run six times the length of Spiders in the Sky’s two minutes, in August. It is being made with a mix of models including Veo3, OpenAI’s Sora and Midjourney.
“I’m trying to prove a point,” says Mallal. “Which is that you can make really good stuff at a high level – but fast, at the speed of culture. Hollywood, especially, moves incredibly slowly.”
He adds: “The creative process is all about making bad stuff to get to the good stuff. We have the best bad ideas faster. But the process is accelerated with AI.”
Mallal and Kazmi also recently made Atlas, Interrupted, a short film about the 3I/Atlas comet, another recent news event, that has appeared on the BBC.
David Jones, the chief executive of Brandtech Group, an advertising startup using generative AI – the term for tools such as chatbots and video generators – to create marketing campaigns, says the advertising world is about to undergo a revolution due to models such as Veo3.
“Today, less than 1% of all brand content is created using gen AI. It will be 100% that is fully or partly created using gen AI,” he says.
Netflix also revealed last week that it used AI in one of its TV shows for the first time.
However, in the background of this latest surge in AI-spurred creativity lies the issue of copyright. In the UK, the creative industries are furious about government proposals to let models be trained on copyright-protected work without seeking the owner’s permission – unless the owner opts out of the process.
Mallal says he wants to see a “broadly accessible and easy-to-use programme where artists are compensated for their work”.
Beeban Kidron, a cross-bench peer and leading campaigner against the government proposals, says AI film-making tools are “fantastic” but “at what point are they going to realise that these tools are literally built on the work of creators?” She adds: “Creators need equity in the new system or we lose something precious.”
YouTube says its terms and conditions allow Google to use creators’ work for making AI models – and denies that all of YouTube’s inventory has been used to train its models.
Mallal calls his use of AI to make films “prompt craft”, a play on “prompt”, the term for an instruction given to an AI system. When making the Ukraine film, he says he was amazed at how quickly a camera angle or lighting tone could be adjusted with a few taps on a keyboard.
“I’m deep into AI. I’ve learned how to prompt engineer. I’ve learned how to translate my skills as a director into prompting. But I’ve never produced anything creative from that. Then Veo3 comes out, and I said, ‘OK, finally, we’re here.’”
AI’s next leap demands a computing revolution
We stand at a technological crossroads remarkably similar to the early 2000s, when the internet’s explosive growth outpaced existing infrastructure capabilities. Just as dial-up connections couldn’t support the emerging digital economy, today’s classical computing systems are hitting fundamental limits that will constrain AI’s continued evolution. The solution lies in quantum computing – and the next five to six years will determine whether we successfully navigate this crucial transition.
The computational ceiling blocking AI advancement
Current AI systems face insurmountable mathematical barriers that mirror the bandwidth bottlenecks of early internet infrastructure. Training large language models like GPT-3 consumes 1,300 megawatt-hours of electricity, while classical optimization problems require exponentially increasing computational resources. Google’s recent demonstration starkly illustrates this divide: their Willow quantum processor completed calculations in five minutes that would take classical supercomputers 10 septillion years – while consuming 30,000 times less energy.
The parallels to early 2000s telecommunications are striking. Then, streaming video, cloud computing, and e-commerce demanded faster data speeds that existing infrastructure couldn’t provide. Today, AI applications like real-time molecular simulation, financial risk optimization, and large-scale pattern recognition are pushing against the physical limits of classical computing architectures. Just as the internet required fiber optic cables and broadband infrastructure, AI’s next phase demands quantum computational capabilities.
Breakthrough momentum accelerating toward mainstream adoption
The quantum computing landscape has undergone transformative changes in 2024-2025 that signal mainstream viability. Google’s Willow chip achieved below-threshold error correction – a critical milestone where quantum systems become more accurate as they scale up. IBM’s roadmap targets 200 logical qubits by 2029, while Microsoft’s topological qubit breakthrough promises inherent error resistance. These aren’t incremental improvements; they represent fundamental advances that make practical quantum-AI systems feasible.
Industry investments reflect this transition from research to commercial reality. Quantum startups raised $2 billion in 2024, representing a 138 per cent increase from the previous year. Major corporations are backing this confidence with substantial commitments: IBM’s $30 billion quantum R&D investment, Microsoft’s quantum-ready initiative for 2025, and Google’s $5 million quantum applications prize. The market consensus projects quantum computing revenue will exceed $1 billion in 2025 and reach $28-72 billion by 2035.
Expert consensus on the five-year transformation window
Leading quantum computing experts across multiple organizations align on a remarkably consistent timeline. IBM’s CEO predicts quantum advantage demonstrations by 2026, while Google targets useful quantum computers by 2029. Quantinuum’s roadmap promises universal fault-tolerant quantum computing by 2030. IonQ projects commercial quantum advantages in machine learning by 2027. This convergence suggests the 2025-2030 period will be as pivotal for quantum computing as 1995-2000 was for internet adoption.
The technical indicators support these projections. Current quantum systems achieve 99.9 per cent gate fidelity – crossing the threshold for practical applications. Multiple companies have demonstrated quantum advantages in specific domains: JPMorgan and Amazon reduced portfolio optimization problems by 80 per cent, while quantum-enhanced traffic optimization decreased Beijing congestion by 20 per cent. These proof-of-concept successes mirror the early internet’s transformative applications before widespread adoption.
Real-world quantum-AI applications emerging across industries
The most compelling evidence comes from actual deployments showing measurable improvements. Cleveland Clinic and IBM launched a dedicated healthcare quantum computer for protein interaction modeling in cancer research. Pfizer partnered with IBM for quantum molecular modeling in drug discovery. DHL optimized international shipping routes using quantum algorithms, reducing delivery times by 20 per cent.
These applications demonstrate quantum computing’s unique ability to solve problems that scale exponentially with classical approaches. Quantum systems process multiple possibilities simultaneously through superposition, enabling breakthrough capabilities in optimization, simulation, and machine learning that classical computers cannot replicate efficiently. The energy efficiency advantages are equally dramatic – quantum systems achieve 3-4 orders of magnitude better energy consumption for specific computational tasks.
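For readers unfamiliar with superposition, a minimal sketch in plain NumPy (no quantum SDK, and no claim about any vendor's hardware) shows the idea the paragraph above relies on: a register of n qubits, after a Hadamard gate on each, carries amplitude on all 2^n basis states at once.

```python
# Minimal sketch (plain NumPy): apply a Hadamard gate to each of n qubits and
# inspect the statevector, which assigns equal amplitude to all 2**n basis
# states simultaneously -- the superposition referred to in the text.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def uniform_superposition(n_qubits):
    """Statevector of |0...0> after a Hadamard on every qubit."""
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                      # start in the all-zeros basis state
    gate = np.array([1.0])
    for _ in range(n_qubits):
        gate = np.kron(gate, H)         # build H (x) H (x) ... (x) H
    return gate @ state

state = uniform_superposition(3)
print(state)               # eight equal amplitudes of 1/sqrt(8)
print(np.sum(state ** 2))  # probabilities still sum to 1
```

This toy example only illustrates the representational trick; extracting a useful answer from such a state is exactly the hard part that quantum algorithms, and the error-corrected hardware discussed above, are meant to solve.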
The security imperative driving quantum adoption
Beyond performance advantages, quantum computing addresses critical security challenges that will force rapid adoption. Current encryption methods protecting AI systems will become vulnerable to quantum attacks within this decade. The US government has mandated federal agencies transition to quantum-safe cryptography, while NIST released new post-quantum encryption standards in 2024. Organizations face a “harvest now, decrypt later” threat where adversaries collect encrypted data today for future quantum decryption.
This security imperative creates unavoidable pressure for quantum adoption. Satellite-based quantum communication networks are already operational, with China’s quantum network spanning 12,000 kilometers and similar projects launching globally. The intersection of quantum security and AI protection will drive widespread infrastructure upgrades in the coming years.
Preparing for the quantum era transformation
The evidence overwhelmingly suggests we’re approaching a technological inflection point where quantum computing transitions from experimental curiosity to essential infrastructure. Just as businesses that failed to adapt to internet connectivity fell behind in the early 2000s, organizations that ignore quantum computing risk losing competitive advantage in the AI-driven economy.
The quantum revolution isn’t coming – it’s here. The next five to six years will determine which organizations successfully navigate this transition and which become casualties of technological change. AI systems must be re-engineered to leverage quantum capabilities, requiring new algorithms, architectures, and approaches that blend quantum and classical computing.
This represents more than incremental improvement; it’s a fundamental paradigm shift that will reshape how we approach computation, security, and artificial intelligence. The question isn’t whether quantum computing will transform AI – it’s whether we’ll be ready for the transformation.
(Krishna Kumar is a technology explorer and strategist based in Austin, Texas, in the US. Rakshitha Reddy is an AI developer based in Atlanta, US.)