Artificial intelligence put to work on extension

Farm Credit Canada, Results Driven Agriculture Research unveil AI tool designed to produce ‘timely advice’ for farmers.

WESTERN PRODUCER – AI extension has arrived in Canada.

Farm Credit Canada and Results Driven Agriculture Research (RDAR) have unveiled a generative artificial intelligence tool that will deliver “timely advice (that) producers can use immediately.”

The tool is called Root.

FCC says it will help farmers adopt best practices, right from their phones.

“Root is more than a technology solution, it’s part of a broader effort to bring back something Canadian agriculture has lost: accessible, trusted and timely insight,” Justine Hendricks, FCC president and chief executive, said in a release.

“With the decline of local advisory networks (extension services), too many farmers and ranchers have had to rely on fragmented information or go at it alone. By partnering with RDAR, we’re helping producers access the kind of expertise that once came from decades of community-based knowledge sharing.”

Many agronomists, livestock specialists and extension experts would take issue with the idea that farmers no longer have trusted and timely advice.

Nonetheless, it is correct to say that government cutbacks have reduced extension services. There are fewer people on the Prairies who provide unbiased and relevant information to producers.

There was a time, maybe 30 to 40 years ago, when provincial government reps were the clear-cut leaders of ag extension across Canada.

Provincial agriculture departments still employ specialists in regional offices, who are responsible for delivering the latest research and best information to livestock and crop producers.

But the number of provincial extension specialists has shrunk.

In some provinces, they have almost disappeared.

In October 2020, the Western Producer reported that the Alberta government had laid off about 135 Alberta Agriculture employees who worked in primary agriculture. That included research and extension staff.

“People always forget that Alberta Agriculture had offices across the province and there was a lot of co-operative work that was done,” said Ross McKenzie, a retired department employee.

“That capacity will be lost. You’ll see (applied research) groups … kind of pick up and carry on, but you won’t have that co-ordinated effort across the province that we had.”

Root might fill some of the void that exists in agricultural extension.

It was actually launched earlier this year and has already “supported” more than 2,900 conversations about farm management, including troubleshooting for problems with machinery, FCC said.

As an AI tool, Root can gather information and learn from the latest results of agricultural research done in Canada and elsewhere.

“We are especially keen on incorporating RDAR (research) materials into Root … making our materials accessible to producers and ranchers,” said Mark Redmond, RDAR’s chief executive officer.

“We are pleased to formalize our partnership with FCC; in the past, we have worked on initiatives concurrently, but now we will collaborate more closely.”

For years, commodity groups for grains, oilseeds, pulses and livestock have used podcasts, webinars, YouTube videos, Twitter (X) and other technologies to share the best information with their members.

The new AI tool could be helpful for producers, but some extension experts still believe personal relationships matter.

Tracy Herbert, the knowledge mobilization and communication director with the Beef Cattle Research Council, said those modern tools can be effective, but personal relationships are critical when it comes to adoption of new agricultural practices.

“Without someone you have a trusted relationship with, who can provide that customized guidance … it’s far less likely that you’ll get to the last step in that process (adoption).”


About the author

Robert Arnason

Reporter

Robert Arnason is a reporter with The Western Producer and Glacier Farm Media. Since 2008, he has authored nearly 5,000 articles on anything and everything related to Canadian agriculture. He didn’t grow up on a farm, but Robert spent hundreds of days on his uncle’s cattle and grain farm in Manitoba. Robert started his journalism career in Winnipeg as a freelancer, then worked as a reporter and editor at newspapers in Nipawin, Saskatchewan and Fernie, BC. Robert has a degree in civil engineering from the University of Manitoba and a diploma in LSJF – Long Suffering Jets’ Fan.


Studying a galaxy far, far away could become easier with help from AI, says researcher

Youssef Zaazou graduated with a master’s of science from the Memorial University of Newfoundland in 2025. (Memorial University/Richard Blenkinsopp)

A recent Memorial University of Newfoundland graduate says his research may help study galaxies more efficiently — with help from artificial intelligence.

As part of Youssef Zaazou’s master’s of science, he developed an AI-based image-processing technique that generates predictions of what certain galaxies may look like in a given wavelength of light.

“Think of it as translating galaxy images across different wavelengths of light,” Zaazou told CBC News over email.

He did this by researching past methods for similar tasks, adapting current AI tools for his specific purposes, finding and curating the right dataset to train the models, along with plenty of trial and error.

“Instead of … having to look at an entire region of sky, we can get predictions for certain regions and figure out, ‘Oh this might be interesting to look at,'” said Zaazou. “So we can then prioritize how we use our telescope resources.”

An excerpt from Zaazou's research showing green light inputs to the model, outputs of the model in red light, the true value of the red light the model aims to replicate, and the difference between rows two and three.

Zaazou developed an AI-based image-processing technique that generates predictions of what certain galaxies may look like in a given wavelength of light. (Submitted by Youssef Zaazou)

Zaazou recently teamed up with his supervisors Terrence Tricco and Alex Bihlo to co-author a paper on his research in The Astrophysical Journal, which is published by The American Astronomical Society.

Tricco says this research could also help justify allocation of high-demand telescopes like the Hubble Space Telescope, which has a competitive process to assign its use.

A future for AI in astronomy

Both Tricco and Zaazou emphasised the research does not use AI to replace current methods but to augment them.

Tricco says that Zaazou’s findings have the potential to help guide future telescope development, and predict what astronomers might expect to see, making for more efficient exploration.

Calling The Astrophysical Journal the “gold standard” of astronomy journals, Tricco hopes the wider astronomical community will take notice of Zaazou’s findings.

“We want to have them be aware of this because as I was mentioning, AI, machine learning, and physics, astronomy, it’s still very new for physicists and for astronomers, and they’re a little bit hesitant about these tools,” said Tricco.

Terrence Tricco, an assistant professor in MUN’s Department of Computer Science, says Zaazou’s findings have the potential to help guide future telescope development. (Submitted by Terrence Tricco)

Tricco praised the growing presence of space research in general at Memorial University.

“We are here, we’re doing great research,” he said.

He added growing AI expertise is also transferable to other disciplines.

“I think that builds into our just tech ecosystem here as well.”

‘Only the beginning’

Though Zaazou’s time as a Memorial University student is over, he hopes to see research in this area continue to grow.

“I’m hoping this is the beginning of further research to be done,” he said.

Though Zaazou described his contribution to the field as merely a “pebble,” he’s happy to have been able to do his part.

“I’m an astronomer. And it just feels great to be able to say that and to be able to have that little contribution because I just love the field and I’m fascinated by everything out there,” said Zaazou.





‘You can make really good stuff – fast’: new AI tools a gamechanger for film-makers | Artificial intelligence (AI)

A US stealth bomber flies across a darkening sky towards Iran. Meanwhile, in Tehran a solitary woman feeds stray cats amid rubble from recent Israeli airstrikes.

To the uninitiated viewer, this could be a cinematic retelling of a geopolitical crisis that unfolded barely weeks ago – hastily shot on location, somewhere in the Middle East.

However, despite its polished production look, it wasn’t shot anywhere, there is no location, and the woman feeding stray cats is no actor – she doesn’t exist.

Midnight Drop, an AI film depicting US-Israeli bombings in Iran

The engrossing footage is the “rough cut” of a 12-minute short film about last month’s US attack on Iranian nuclear sites, made by the directors Samir Mallal and Bouha Kazmi. It is also made entirely by artificial intelligence.

The clip is based on a detail the film-makers read in news coverage of the US bombings – a woman who walked the empty streets of Tehran feeding stray cats. Armed with the information, they have been able to make a sequence that looks as if it could have been created by a Hollywood director.

The impressive speed and, for some, worrying ease with which films of this kind can be made has not been lost on broadcasting experts.

Last week Richard Osman, the TV producer and bestselling author, said that an era of entertainment industry history had ended and a new one had begun – all because Google has released a new AI video-making tool used by Mallal and others.

A still from Midnight Drop, showing the woman who feeds stray cats in Tehran in the dead of night. Photograph: Oneday Studios

“So I saw this thing and I thought, ‘well, OK that’s the end of one part of entertainment history and the beginning of another’,” he said on The Rest is Entertainment podcast.

Osman added: “TikTok, ads, trailers – anything like that – I will say will be majority AI-assisted by 2027.”

For Mallal, an award-winning London-based documentary maker who has made adverts for Samsung and Coca-Cola, AI has provided him with a new format – “cinematic news”.

The Tehran film, called Midnight Drop, is a follow-up to Spiders in the Sky, a recreation of a Ukrainian drone attack on Russian bombers in June.

Within two weeks, Mallal, who directed Spiders in the Sky on his own, was able to make a film about the Ukraine attack that would have cost millions – and would have taken at least two years including development – to make pre-AI.

“Using AI, it should be possible to make things that we’ve never seen before,” he said. “We’ve never seen a cinematic news piece before turned around in two weeks. We’ve never seen a thriller based on the news made in two weeks.”

Spiders in the Sky was largely made with Veo3, an AI video generation model developed by Google, and other AI tools. The voiceover, script and music were not created by AI, although ChatGPT helped Mallal edit a lengthy interview with a drone operator that formed the film’s narrative spine.

Film-maker recreates Ukrainian drone attack on Russia using AI in Spiders in the Sky

Google’s film-making tool, Flow, is powered by Veo3. It also creates speech, sound effects and background noise. Since its release in May, the impact of the tool on YouTube – also owned by Google – and social media in general has been marked. As Marina Hyde, Osman’s podcast partner, said last week: “The proliferation is extraordinary.”

Quite a lot of it is “slop” – the term for AI-generated nonsense – although the Olympic diving dogs have a compelling quality.

Mallal and Kazmi aim to complete the film in August; it will intercut the Iranian woman’s story with the stealth bomber mission and will be six times the length of Spiders in the Sky’s two minutes. It is being made with a mix of models including Veo3, OpenAI’s Sora and Midjourney.

“I’m trying to prove a point,” says Mallal. “Which is that you can make really good stuff at a high level – but fast, at the speed of culture. Hollywood, especially, moves incredibly slowly.”


Spiders in the Sky, an AI film directed by Samir Mallal, tells the story of Ukraine’s drone attacks on Russian airfields. Photograph: Oneday Studios

He adds: “The creative process is all about making bad stuff to get to the good stuff. We have the best bad ideas faster. But the process is accelerated with AI.”

Mallal and Kazmi also recently made Atlas, Interrupted, a short film about the 3I/Atlas comet, another recent news event, that has appeared on the BBC.

David Jones, the chief executive of Brandtech Group, an advertising startup using generative AI – the term for tools such as chatbots and video generators – to create marketing campaigns, says the advertising world is about to undergo a revolution due to models such as Veo3.

“Today, less than 1% of all brand content is created using gen AI. It will be 100% that is fully or partly created using gen AI,” he says.

Netflix also revealed last week that it used AI in one of its TV shows for the first time.

A Ukrainian drone homes in on its target in Spiders in the Sky. Photograph: Oneday Studios

However, in the background of this latest surge in AI-spurred creativity lies the issue of copyright. In the UK, the creative industries are furious about government proposals to let models be trained on copyright-protected work without seeking the owner’s permission – unless the owner opts out of the process.

Mallal says he wants to see a “broadly accessible and easy-to-use programme where artists are compensated for their work”.

Beeban Kidron, a cross-bench peer and leading campaigner against the government proposals, says AI film-making tools are “fantastic” but “at what point are they going to realise that these tools are literally built on the work of creators?” She adds: “Creators need equity in the new system or we lose something precious.”

YouTube says its terms and conditions allow Google to use creators’ work for making AI models – and denies that all of YouTube’s inventory has been used to train its models.

Mallal calls his use of AI to make films “prompt craft”, a phrase that uses the term for giving instructions to AI systems. When making the Ukraine film, he says he was amazed at how quickly a camera angle or lighting tone could be adjusted with a few taps on a keyboard.

“I’m deep into AI. I’ve learned how to prompt engineer. I’ve learned how to translate my skills as a director into prompting. But I’ve never produced anything creative from that. Then Veo3 comes out, and I said, ‘OK, finally, we’re here.’”




AI’s next leap demands a computing revolution

We stand at a technological crossroads remarkably similar to the early 2000s, when the internet’s explosive growth outpaced existing infrastructure capabilities. Just as dial-up connections couldn’t support the emerging digital economy, today’s classical computing systems are hitting fundamental limits that will constrain AI’s continued evolution. The solution lies in quantum computing – and the next five to six years will determine whether we successfully navigate this crucial transition.

The computational ceiling blocking AI advancement

Current AI systems face insurmountable mathematical barriers that mirror the bandwidth bottlenecks of early internet infrastructure. Training large language models like GPT-3 consumes 1,300 megawatt-hours of electricity, while classical optimization problems require exponentially increasing computational resources. Google’s recent demonstration starkly illustrates this divide: their Willow quantum processor completed calculations in five minutes that would take classical supercomputers 10 septillion years – while consuming 30,000 times less energy.

The parallels to early 2000s telecommunications are striking. Then, streaming video, cloud computing, and e-commerce demanded faster data speeds that existing infrastructure couldn’t provide. Today, AI applications like real-time molecular simulation, financial risk optimization, and large-scale pattern recognition are pushing against the physical limits of classical computing architectures. Just as the internet required fiber optic cables and broadband infrastructure, AI’s next phase demands quantum computational capabilities.

Breakthrough momentum accelerating toward mainstream adoption

The quantum computing landscape has undergone transformative changes in 2024-2025 that signal mainstream viability. Google’s Willow chip achieved below-threshold error correction – a critical milestone where quantum systems become more accurate as they scale up. IBM’s roadmap targets 200 logical qubits by 2029, while Microsoft’s topological qubit breakthrough promises inherent error resistance. These aren’t incremental improvements; they represent fundamental advances that make practical quantum-AI systems feasible.

Industry investments reflect this transition from research to commercial reality. Quantum startups raised $2 billion in 2024, representing a 138 per cent increase from the previous year. Major corporations are backing this confidence with substantial commitments: IBM’s $30 billion quantum R&D investment, Microsoft’s quantum-ready initiative for 2025, and Google’s $5 million quantum applications prize. The market consensus projects quantum computing revenue will exceed $1 billion in 2025 and reach $28-72 billion by 2035.

Expert consensus on the five-year transformation window

Leading quantum computing experts across multiple organizations align on a remarkably consistent timeline. IBM’s CEO predicts quantum advantage demonstrations by 2026, while Google targets useful quantum computers by 2029. Quantinuum’s roadmap promises universal fault-tolerant quantum computing by 2030. IonQ projects commercial quantum advantages in machine learning by 2027. This convergence suggests the 2025-2030 period will be as pivotal for quantum computing as 1995-2000 was for internet adoption.

The technical indicators support these projections. Current quantum systems achieve 99.9 per cent gate fidelity – crossing the threshold for practical applications. Multiple companies have demonstrated quantum advantages in specific domains: JPMorgan and Amazon reduced portfolio optimization problems by 80 per cent, while quantum-enhanced traffic optimization decreased Beijing congestion by 20 per cent. These proof-of-concept successes mirror the early internet’s transformative applications before widespread adoption.

Real-world quantum-AI applications emerging across industries

The most compelling evidence comes from actual deployments showing measurable improvements. Cleveland Clinic and IBM launched a dedicated healthcare quantum computer for protein interaction modeling in cancer research. Pfizer partnered with IBM for quantum molecular modeling in drug discovery. DHL optimized international shipping routes using quantum algorithms, reducing delivery times by 20 per cent.

These applications demonstrate quantum computing’s unique ability to solve problems that scale exponentially with classical approaches. Quantum systems process multiple possibilities simultaneously through superposition, enabling breakthrough capabilities in optimization, simulation, and machine learning that classical computers cannot replicate efficiently. The energy efficiency advantages are equally dramatic – quantum systems achieve 3-4 orders of magnitude better energy consumption for specific computational tasks.

The security imperative driving quantum adoption

Beyond performance advantages, quantum computing addresses critical security challenges that will force rapid adoption. Current encryption methods protecting AI systems will become vulnerable to quantum attacks within this decade. The US government has mandated federal agencies transition to quantum-safe cryptography, while NIST released new post-quantum encryption standards in 2024. Organizations face a “harvest now, decrypt later” threat where adversaries collect encrypted data today for future quantum decryption.

This security imperative creates unavoidable pressure for quantum adoption. Satellite-based quantum communication networks are already operational, with China’s quantum network spanning 12,000 kilometers and similar projects launching globally. The intersection of quantum security and AI protection will drive widespread infrastructure upgrades in the coming years.

Preparing for the quantum era transformation

The evidence overwhelmingly suggests we’re approaching a technological inflection point where quantum computing transitions from experimental curiosity to essential infrastructure. Just as businesses that failed to adapt to internet connectivity fell behind in the early 2000s, organizations that ignore quantum computing risk losing competitive advantage in the AI-driven economy.

The quantum revolution isn’t coming – it’s here. The next five to six years will determine which organizations successfully navigate this transition and which become casualties of technological change. AI systems must be re-engineered to leverage quantum capabilities, requiring new algorithms, architectures, and approaches that blend quantum and classical computing.

This represents more than incremental improvement; it’s a fundamental paradigm shift that will reshape how we approach computation, security, and artificial intelligence. The question isn’t whether quantum computing will transform AI – it’s whether we’ll be ready for the transformation.

(Krishna Kumar is a technology explorer and strategist based in Austin, Texas, in the US. Rakshitha Reddy is an AI developer based in Atlanta, US.)




Copyright © 2025 AISTORIZ. For enquiries email at prompt@travelstoriz.com