
Woman conned out of $15K after AI clones daughter’s voice

A Florida woman says she was conned out of $15,000 by a scammer who used artificial intelligence to replicate her daughter’s voice.

Sharon Brightwell, who lives outside of Tampa, told WFLA that she was targeted by scammers last Wednesday after receiving a call from a number that appeared to belong to her daughter, April Monroe. When she picked up, Brightwell heard her daughter’s hysterical voice claiming she had hit a pregnant woman with her car while texting and driving.

“There is nobody that could convince me that it was not [her voice],” Brightwell told WFLA. “I know my daughter’s cry, even though she’s an adult, I still know my daughter’s cry.”

Monroe was in Carrollwood, a nearby suburb, at the time, she wrote on a GoFundMe page set up to help recoup her mother’s money.

“My voice was AI cloned and sounded exactly like me,” Monroe wrote. “After you hear your child in distress, all logic is out the window.”

A Florida woman says con artists extorted $15,000 from her by using artificial intelligence to recreate her daughter’s voice (Getty Images)

A man then took the phone and claimed to be Monroe’s attorney. He told Brightwell he needed $15,000 in cash to post her daughter’s bail. She could not tell the bank what the money was for, the man said, or her daughter’s credit would be affected.

“He says, ‘Can you do that?’ I said ‘Not really, but yes,’” Brightwell told WFLA. “I’ll do whatever I have to do for my daughter.”

Brightwell withdrew the money from her bank and put it inside a box, which she then gave to a driver who showed up at her house.

Soon after, Brightwell received another call from someone claiming to be a relative of the pregnant woman her daughter supposedly hit. They said the woman’s unborn baby had been killed in the wreck and they wanted $30,000 cash, or else they’d sue.

Sharon Brightwell said the scammers told her to withdraw $15,000, but said she couldn’t tell her bank why (Getty Images)

Monroe said her son was with Brightwell the whole time and was in “just as much panic and worry.” He realized it was a scam only after Monroe texted him during her lunch break.

“Then it all came together,” Monroe wrote. “My mom and son were in absolute shock.”

Monroe immediately left to be with her mother and son. Her son “hunched over to throw up” when he first saw her and realized she was safe, she said.

“To tell you the trauma that my mom and son went through that day makes me nauseous and has made me lose more faith in humanity,” Monroe wrote. “Evil is too nice a word for the kind of people that can do this.”

Brightwell and her family encourage people to take proactive steps to prevent scams, such as coming up with a “code word” to use in emergency situations, WFLA reports.

Monroe said she filed a police report, and an investigation is underway. The Independent has contacted the Hillsborough County Sheriff’s Department for comment.




Virginia Is First State to Use Agentic AI for Regulatory Streamlining



Virginia is launching a pilot program that will use artificial intelligence (AI) agents to streamline regulations — the first such effort in the country — and reinforce the state’s standing as a friendly place to do business.

Gov. Glenn Youngkin issued an executive order to deploy AI agents to review and streamline Virginia’s regulations. The tool will scan all regulations and guidance documents to identify conflicts with statutes, redundancies, and complex or unclear language.
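The announcement does not describe how the review tool is built, but an agentic review of this kind is typically structured as a loop over regulation sections, with a language model asked to flag each of the three problem categories. The sketch below is purely illustrative: `call_llm` is a hypothetical placeholder, not the pilot’s actual system.

```python
# Illustrative sketch only: the Virginia pilot's actual tooling has not been disclosed.
# `call_llm` is a hypothetical placeholder for whatever model the state deploys.
from dataclasses import dataclass

@dataclass
class Finding:
    section_id: str
    category: str  # "statutory_conflict", "redundancy", or "unclear_language"
    note: str

PROMPT = (
    "You are reviewing a state regulation against its enabling statute.\n"
    "Statute excerpt:\n{statute}\n\nRegulation section:\n{regulation}\n\n"
    "List any statutory conflicts, redundancies, or complex/unclear language."
)

def call_llm(prompt: str) -> list[dict]:
    """Hypothetical model call; expected to return [{'category': ..., 'note': ...}, ...]."""
    raise NotImplementedError("Swap in the actual model or API used by the pilot.")

def review_regulations(sections: dict[str, str], statute: str) -> list[Finding]:
    findings = []
    for section_id, text in sections.items():
        for item in call_llm(PROMPT.format(statute=statute, regulation=text)):
            findings.append(Finding(section_id, item["category"], item["note"]))
    return findings
```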

“We have made tremendous strides towards streamlining regulations and the regulatory process in the Commonwealth,” Youngkin said in a press release. “Using emergent artificial intelligence tools, we will push this effort further in order to continue our mission of unleashing Virginia’s economy in a way that benefits all of its citizens.”

The new executive order builds on two 2022 orders that required Virginia agencies to streamline regulations by at least 25%.

To date, state agencies have streamlined regulations by 26.8% on average and cut 48% of the words in guidance documents.

The new executive order is expected to help agencies struggling to hit the 25% regulatory reduction goal and give a further boost to those that have already met or exceeded requirements. The goal is to ensure the streamlining is done “to the greatest extent possible,” according to the governor’s office.

See more: Tech Giants Seek 10-Year Freeze on State AI Rules

All States Now Have AI Bills or Laws

The launch comes as Congress removed a proposed 10-year ban on state AI regulations from President Donald Trump’s “One Big Beautiful Bill.”

At present, states are accelerating AI regulation. All 50 states plus D.C., Puerto Rico and the Virgin Islands introduced AI legislation in 2025, with more than half enacting measures covering areas such as algorithmic fairness, transparency and consumer protections, according to a blog post by the law firm Brownstein Hyatt Farber Schreck.

In California, major bills include SB 420, which would establish an AI bill of rights, and SB 243, which aims to protect minors from chatbot manipulation. There’s also AB 1018, which seeks to ensure AI systems exhibit fairness in housing and hiring decisions, according to Brownstein.

In New York, SB 6453 has passed both chambers and would be the first state law to restrict “frontier,” or advanced, AI models, according to Brownstein. In Connecticut, SB 2 is a comprehensive AI bill that awaits final votes.

Texas, Colorado, Utah and Montana have already enacted AI laws, and with the federal moratorium dropped, uncertainty about their enforceability has been lifted, the law firm said.

Meanwhile, California’s Judicial Council is considering requiring all 65 of the state’s courts to adopt policies governing generative AI use unless they ban it outright, according to Reuters. If adopted, the rule would make California’s the largest court system in the country with an AI policy.

Other states where court systems already have an AI policy include Illinois, Delaware and Arizona. States considering adopting an AI policy for their courts include New York, Georgia and Connecticut.


UK switches on AI supercomputer that will help spot sick cows and skin cancer



Britain’s new £225m national artificial intelligence supercomputer will be used to spot sick dairy cows in Somerset, improve the detection of skin cancer on brown skin, and power wearable AI assistants that could help riot police anticipate danger.

Scientists hope Isambard-AI – named after the 19th-century engineer of groundbreaking bridges and railways, Isambard Kingdom Brunel – will unleash a wave of AI-powered technological, medical and social breakthroughs by allowing academics and public bodies access to the kind of vast computing power previously the preserve of private tech companies.

The supercomputer was formally switched on in Bristol on Thursday by the secretary of state for science and technology, Peter Kyle, who said it gave the UK “the raw computational horsepower that will save lives, create jobs, and help us reach net zero ambitions faster”.

The machine is fitted with 5,400 Nvidia “superchips” and sits inside a black metal cage topped with razor wire, north of the city. It will consume almost £1m a month of mostly nuclear-powered electricity and will run 100,000 times faster than an average laptop.
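The article does not give the machine’s power draw, but the electricity bill implies a rough figure. The estimate below assumes an industrial tariff of roughly £0.25 per kWh, which is an assumption rather than a reported number.

```python
# Back-of-envelope estimate of Isambard-AI's average power draw from its reported bill.
# The tariff is an assumed figure, not one reported in the article.
monthly_bill_gbp = 1_000_000   # "almost £1m a month" (reported)
assumed_gbp_per_kwh = 0.25     # assumed UK industrial electricity price
hours_per_month = 730          # average month length in hours

energy_kwh = monthly_bill_gbp / assumed_gbp_per_kwh   # ~4 million kWh per month
avg_power_mw = energy_kwh / hours_per_month / 1000    # ~5.5 MW average draw

print(f"Implied average power draw: ~{avg_power_mw:.1f} MW")
```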

Amid fierce international competition for computing power, it is the largest publicly acknowledged facility in the UK but will be the 11th fastest in the world behind those in the US, Japan, Germany, Italy, Finland and Switzerland. Elon Musk’s new xAI supercomputer in Tennessee already has 20 times its processing power, while Meta’s chief executive, Mark Zuckerberg, is planning a datacentre that “covers a significant part of the footprint of Manhattan”.

The investment is part of the government’s £2bn push to attain “AI sovereignty” so Britain does not have to rely on foreign processing chips to make AI-enabled research progress. But the switch-on could trigger new ethical dilemmas about how far AI should be allowed to steer policy on anything from the control of public protests to the breeding of animals.

One AI model under development by academics at the University of Bristol is an algorithm that learns from thousands of hours of footage of human motion, captured using wearable cameras. The idea is to predict how people might move next. It could be applied to a wide range of scenarios, from enabling police to anticipate how crowds of protesters may behave to predicting accidents in an industrial setting such as a construction site.

Dima Damen, a professor of computer vision at the university, said based on patterns in the human behaviours a wearable camera was capturing in real time, the algorithm, trained by Isambard-AI, could even “give an early warning that in the next two minutes, something is likely to happen here”.

Damen added there were “huge ethical implications of AI” and it would be important to always know why a system made a decision.

“One of the fears of AI is that some people will own the technology and the knowhow and others won’t,” she said. “It’s our biggest duty as researchers to make sure that the data and the knowledge is available for everyone.”

Another AI model under development could detect early infections in cows. A herd in Somerset is being filmed around the clock to train a model to predict if an animal is in the early stages of mastitis, which affects milk production and is an animal welfare problem. The scientists at Bristol believe this could be possible based on detecting subtle shifts in cows’ social behaviour.

“The farmer obviously takes a great interest in their herd, but they don’t necessarily have the time to look at all of the cows in their herd continuously day in, day out, so the AI will be there to provide that view,” said Andrew Dowsey, a professor of health data science at the University of Bristol.

A third group of researchers are using the supercomputer to look for bias in skin cancer detection. James Pope, a senior lecturer in data science at the University of Bristol, has already run “quadrillions if not quintillions of computations” on Isambard and found that current phone apps for checking moles and lesions for signs of cancer perform better on lighter coloured skin. If confirmed with further testing, the apps could be retuned to avoid bias.

“It would be quite difficult, and frankly impossible to do it with a traditional computer,” he said.
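The article does not detail Pope’s method, but a first-pass check for this kind of bias typically compares a detector’s sensitivity across skin-tone groups. The sketch below assumes a labelled test set with Fitzpatrick-style skin-type annotations and a model’s predictions; it is illustrative only, not the Bristol group’s pipeline.

```python
# Minimal sketch of a per-skin-tone bias check for a lesion classifier.
# Assumes (label, prediction, skin_type) triples; not the Bristol team's actual pipeline.
from collections import defaultdict

def sensitivity_by_skin_type(records):
    """records: iterable of (is_malignant: bool, predicted_malignant: bool, skin_type: str)."""
    hits = defaultdict(int)    # true positives per group
    totals = defaultdict(int)  # malignant cases per group
    for is_malignant, predicted, skin_type in records:
        if is_malignant:
            totals[skin_type] += 1
            hits[skin_type] += int(predicted)
    return {k: hits[k] / totals[k] for k in totals if totals[k]}

# Example: a large gap between groups would suggest the kind of bias Pope describes.
sample = [(True, True, "I-II"), (True, True, "I-II"), (True, False, "V-VI"), (True, True, "V-VI")]
print(sensitivity_by_skin_type(sample))  # {'I-II': 1.0, 'V-VI': 0.5}
```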




This AI Warps Live Video in Real Time



Dean Leitersdorf introduces himself over Zoom, then types a prompt that makes me feel like I’ve just taken psychedelic mushrooms: “wild west, cosmic, Roman Empire, golden, underwater.” He feeds the words into an artificial intelligence model developed by his startup, Decart, which manipulates live video in real time.

“I have no idea what’s going to happen,” Leitersdorf says with a laugh, shortly before transforming into a bizarre, gold-tinged, subaquatic version of Julius Caesar in a poncho.

Leitersdorf already looks a bit wild—long hair tumbling down his back, a pen doing acrobatics in his fingers. As we talk, his onscreen image oscillates in surreal ways while the model tries to predict what each new frame should look like. Leitersdorf puts his hands over his face and emerges with more feminine features. His pen jumps between different colors and shapes. He adds more prompts that take us to new psychedelic realms.

Decart’s video-to-video model, Mirage, is both an impressive feat of engineering and a sign of how AI might soon shake up the livestreaming industry. Tools like OpenAI’s Sora can conjure increasingly realistic video footage with a text prompt. Mirage now makes it possible to manipulate video in real time.

On Thursday, Decart is launching a website and app that will allow users to create their own videos and modify YouTube clips. The website offers several default themes including “anime,” “Dubai skyline,” “cyberpunk,” and “Versailles Palace.” During our interview, Leitersdorf uploads a clip of someone playing Fortnite and the scene transforms from the familiar Battle Royale world into a version set underwater.

Decart’s technology has big potential for gaming. In November 2024, the company demoed a game called Oasis that used a similar approach to Mirage to generate a playable Minecraft-like world on the fly. Users could move close to a texture and then zoom out again to produce new playable scenes inside the game.

Manipulating live scenes in real time is even more computationally taxing. Decart wrote low-level code to squeeze high-speed calculations out of Nvidia chips to achieve the feat. Mirage generates 20 frames per second at 768 × 432 resolution and a latency of 100 milliseconds per frame—good enough for a decent-quality TikTok clip.
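Those figures imply how the pipeline behaves: at 20 frames per second a new frame is due every 50 milliseconds, so a 100-millisecond per-frame latency means roughly two frames are in flight at any moment. The arithmetic below simply restates the reported numbers.

```python
# Reading Mirage's reported numbers: throughput vs. per-frame latency.
fps = 20                  # reported frames per second
latency_s = 0.100         # reported per-frame latency
width, height = 768, 432  # reported output resolution

frame_interval_s = 1 / fps                        # 0.05 s between output frames
frames_in_flight = latency_s / frame_interval_s   # ~2 frames processed concurrently
pixels_per_second = width * height * fps          # ~6.6 million output pixels per second

print(frame_interval_s, frames_in_flight, pixels_per_second)
```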


