Artificial Intelligence in Asset and Wealth Management
Artificial intelligence (AI) has moved with lightning speed from buzzword to boardroom priority in the past three years. Asset managers and family wealth advisors have traveled a long way from their early experiments with ChatGPT. They’re now beginning to realize the potential of AI to enhance investment decisions, automate operations and deliver personalized client experiences. But AI opportunities come with unique risks, especially around data privacy and security, as well as regulatory and legal compliance in a fast-moving, fast-changing landscape.
Let’s break down the fundamentals of AI and explore some of the most critical risks, offering three primary guideposts for responsible AI adoption and risk management.
New York Enacts Artificial Intelligence Companion Mental Health Law
Key Takeaways:
- New York is the first state to enact mental health-focused statutory provisions for “AI Companions,” requiring user disclosures and suicide prevention measures for emotionally interactive AI systems.
- Other states are exploring similar approaches, with laws targeting compulsive use, requiring suicide prevention protocols or mandating user awareness of AI-human distinctions.
- Organizations must assess their AI risk to ensure compliance with the myriad laws and statutory provisions governing AI systems.
In May 2025, as part of its state budget process, New York enacted new statutory provisions for “AI Companions” that highlight an emerging push to monitor and safeguard the mental health of AI tool and system users. The provisions align with a broader regulatory awareness of the mental health risks involved in AI interactions and the desire to safeguard vulnerable AI users, particularly minors and those experiencing mental health crises such as suicidal ideation.
An Emerging Desire to Safeguard Mental Health in an AI-Enabled World
Regulators are increasingly aware of the mental health risks involved in AI interactions and are seeking ways to safeguard vulnerable users. These risks were brought into sharp focus by the death of Sewell Setzer, a 14-year-old Florida teenager who died by suicide after forming a romantic and emotional relationship with an AI chatbot and allegedly telling the chatbot he was contemplating suicide. His death is now the subject of a closely watched lawsuit over the chatbot’s role.
States have considered a variety of techniques for regulating this space, ranging from user disclosures to safety measures. Utah’s law on mental health chatbots (H.B. 452), for example, imposes advertising restrictions and requires certain disclosures to ensure users know they are interacting with an AI rather than a human being. Other states, like California (via SB 243), are considering design mandates such as banning reward systems that encourage compulsive use and requiring suicide prevention measures in any AI chatbot marketed as an emotional companion. Currently, NY is the only state that has enacted safety-focused measures (like suicide prevention) around AI companionship.
NY’s Approach to Embedding Mental Health Safeguards in AI
NY’s new statutory provisions (which go into effect on November 5, 2025) focus on AI systems that retain user information and preferences from prior interactions to engage in human-like conversation with their users.
These systems, termed “AI Companions,” are characterized by their ability to sustain ongoing conversations about personal matters, including topics typically found in friendships or emotionally supportive interactions. That means chatbots, digital wellness tools, mental health apps or even productivity assistants with emotionally aware features could fall within the scope of AI Companions, depending on how they interact with users, although interactive AI systems used strictly for customer service, internal operations, research and/or productivity optimization are excluded.
The law seeks to drive consumer awareness and prevent suicide and other forms of self-harm by mandating that such AI systems (1) affirmatively notify users they are not interacting with a human and (2) take measures to prevent self-harm. Operators must provide clear and conspicuous notifications at the start of any interaction (and every three hours during long, ongoing interactions) to ensure users are aware they’re not interacting with a human. Operators must also ensure the AI system has reasonable protocols to detect suicidal ideation or expressions of self-harm by a user and refer the user to crisis service providers like the 988 Suicide Prevention and Behavioral Health Crisis Hotline whenever such expressions are detected.
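For operators mapping these two mandates onto an actual chat loop, a minimal sketch follows. It assumes a hypothetical CompanionSession wrapper and uses a simple keyword screen purely for illustration; the statute’s “reasonable protocols” standard would in practice call for a clinically validated detection model rather than a regex, and none of the names below come from the law or any vendor API.

```python
import re
import time

DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
CRISIS_REFERRAL = (
    "If you are thinking about harming yourself, help is available: "
    "call or text 988."
)
REDISCLOSURE_SECONDS = 3 * 60 * 60  # re-notify every three hours in ongoing sessions

# Placeholder patterns only: a real protocol would use a clinically
# validated classifier, not a keyword list.
SELF_HARM_PATTERNS = re.compile(
    r"\b(kill myself|suicide|end my life|hurt myself)\b", re.IGNORECASE
)


class CompanionSession:
    """Hypothetical wrapper around a chat loop; illustrative only."""

    def __init__(self) -> None:
        self._last_disclosure: float | None = None

    def _maybe_disclose(self) -> str | None:
        # Disclose at the start of the interaction, then every three hours.
        now = time.monotonic()
        if (self._last_disclosure is None
                or now - self._last_disclosure >= REDISCLOSURE_SECONDS):
            self._last_disclosure = now
            return DISCLOSURE
        return None

    def handle_message(self, user_text: str) -> list[str]:
        outputs: list[str] = []
        disclosure = self._maybe_disclose()
        if disclosure:
            outputs.append(disclosure)
        if SELF_HARM_PATTERNS.search(user_text):
            # Refer the user to crisis services whenever self-harm is detected.
            outputs.append(CRISIS_REFERRAL)
        # ...the model's normal reply would be generated and appended here...
        return outputs


if __name__ == "__main__":
    session = CompanionSession()
    print(session.handle_message("hey, rough day"))         # disclosure fires once
    print(session.handle_message("I want to end my life"))  # crisis referral fires
```

The sketch keeps both compliance checks outside the model itself, which reflects how such guardrails are commonly layered around a chat system rather than relying on the model to self-disclose.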
Assessing AI Regulatory Risk
Whether in the context of chatbots, wellness apps, education platforms or AI-driven social tools, regulators are increasingly focused on systems that engage deeply with users. Because these systems may be uniquely positioned to detect warning signs like expressions of hopelessness, isolation or suicidal ideation, it’s likely that other states will follow NY in requiring certain AI systems to identify, respond to or otherwise escalate signals of mental health distress to protect vulnerable populations like minors.
NY’s new AI-related mental health provisions also showcase how U.S. laws and statutory provisions around AI focus heavily on how the technology is being used. In other words, your use case determines your risk. To effectively navigate the U.S. patchwork of AI-related laws and statutory provisions, which currently includes more than 100 state laws, organizations must evaluate each AI use case to identify its compliance risks and obligations.
Polsinelli offers an AI risk assessment that enables organizations to do exactly that. Understanding your AI risks is your first line of defense, and a powerful business enabler. Let us help you evaluate whether your AI use case falls within use-case- or industry-specific laws like NY’s “AI Companion” law or industry-agnostic ones like Colorado’s AI Act, so you can deploy innovative business tools and solutions with confidence.
New cybersecurity, artificial intelligence degrees at Upper Iowa University can help protect, grow businesses
FAYETTE, Iowa (KCRG) – Five new degree programs at Upper Iowa University will help students get trained in artificial intelligence, cybersecurity, and business analytics.
While AI and cybersecurity may be foreign to Main Street business owners, Dubuque County IT Director Nathan Gilmore says investing in the fields is critical.
“It only takes one breach or it only takes one incident to potentially make them close up shop or wipe out years of profits,” says Gilmore.
Data breaches occur every 39 seconds, according to cybersecurity company SentinelOne.
“It is, at this point, in my opinion, no different than an electric bill or a water bill,” explains Gilmore. “It’s just part of doing business.”
While investments in cybersecurity can help protect companies, Gilmore says artificial intelligence can help business owners save time and money. AI can automate tasks such as billing, scheduling appointments and answering customers’ questions online.
“It’s automated. You’re not using actual staff time. Those are the sorts of force multipliers you can use AI in a very positive fashion,” says Gilmore.
Gilmore says more trained workers are needed in both growing fields to address demand, and new degree programs at Upper Iowa University launching this fall will help fill the need.
“It was kind of a no-brainer for us,” shares Dr. Billie Cowley, Vice President for Academic Affairs at Upper Iowa University.
This fall, UIU in Fayette is launching the following:
- Bachelor of Science in Cybersecurity
- Bachelor of Science in Business Analytics
- Master of Business Administration, Cybersecurity
- Master of Business Administration, Artificial Intelligence (AI)
- Master of Public Administration, Cybersecurity
“It’s extremely exciting. There will be a pool of knowledgeable, trained people who will be able to serve this Upper Midwest region,” says Gilmore. “Yes, a lot of this stuff can be done remotely. No question, it can be, but Main Street is also a very face-to-face type world. They want to talk to somebody.”
Cowley says she’s seen firsthand the rate at which AI is evolving.
“We’ve done some AI training with faculty, and what we learn in the fall is now different than what we know in the spring,” says Cowley.
Cowley says the programs are designed to shift as these fields evolve.
“That is massive because this is not a static industry,” says Gilmore. “If those programs are set up to incorporate the changes that are here and coming, that is a massive boon for these programs because this stuff is changing monthly.”
Cowley says there’s no limit to the number of students Upper Iowa will enroll in these programs. Instead, enrollment will be based on demand, and staff will be hired as needed.
“Upper Iowa is like home to me and my husband,” shares Cowley. “To be a part of this and see this growth, there’s no words to describe what this means.”
More information about UIU’s new offerings can be found at GO.UIU.EDU/FutureReady.
Before winning reelection bid, DC Council member Trayon White apparently used AI political ad
Before winning back his D.C. Council seat in a special election this week, Trayon White posted a video rallying voters that viewers quickly sniffed out as artificial intelligence.
The video shows a figure whose appearance and voice are robotic. The AI company’s logo appears in the bottom-right corner of the Instagram post, which was shared Monday, the last day of early voting.
“They hope we stay home,” the female voice said as it urged voters to head to the polls. “They hope we stay silent, but the truth is, no one is coming to save us but us. We have the power to shape the now and it’s time we use it.”
White won reelection Tuesday.
Ward 6 Council member Charles Allen recently reintroduced legislation to regulate campaign advertisements like White’s recent video.
“There was no disclosure or transparency in that ad,” Allen told WTOP’s Jessica Kronzer. “And that’s what we’ve seen on a lot of AI generated ads.”
The legislation was brought forward days before White’s post; Allen said it would require advertisements generated by AI to have a label. Such ads would be banned altogether 90 days before an election.
“It’s perfectly OK for campaigns and for candidates to contrast themselves as much as they want with other people on the ballot, but we do expect people to tell the truth about who is speaking, and this just helps make sure that happens,” Allen said.
White’s video was first flagged by 51st News journalist Martin Austermuhle, who posted on X a video using the same avatar that appeared in the campaign’s advertisement.
But White is far from alone in using generative artificial intelligence for political purposes.
Campaigns are already using generative AI to write fundraising emails and draft speeches, and in some instances, avatars are even making calls to voters.
“The AI-in-politics toothpaste isn’t just out of the tube,” said Peter Loge, the director of the School of Media and Public Affairs at the George Washington University. “It’s going to happy hour and taking selfies with the candidates.”
WTOP has reached out to White for comment.
Can legislation regulate campaigns’ use of AI?
Other states have passed measures aimed at making generative AI use in political campaigns more transparent through disclaimers or banning it altogether.
Allen is hoping Maryland and Virginia will adopt legislation similar to his proposal.
But Loge said regulating the practice is a challenge.
“Laws aren’t a bad thing. Regulations aren’t a bad thing, but they have to be enforceable,” he said. “They can’t be easily skirted. And what political campaign professionals have proven again and again is they can skirt almost anything.”
Costs of AI use for political campaigns
Loge has been studying AI for years and teaches courses on political communication ethics. He said some of the ethical issues presented by AI are age-old.
“People didn’t start lying in politics with the introduction of AI. Politics in America wasn’t puppies and rainbows until social media, then suddenly the wheels came off,” Loge said. “What AI does is allow us to do what we’ve always done, but louder, more faster, with greater impact.”
He gave an example of the 19th-century artists of the Hudson River School, who were tasked with painting the great American landscapes that later inspired the national parks and Western expansion. Historians believe those painters exaggerated what they saw in their artwork.
“We’ve had deepfakes in oils since the 1800s … generative AI makes it easier and faster to do that,” Loge said. “That’s arguably a bad thing.”
In 2023, during the race for the Republican Party’s presidential nomination, Florida Gov. Ron DeSantis’ campaign shared an image, which appeared to be fake, of President Donald Trump hugging Anthony Fauci. The campaign used the image to criticize Trump’s alleged support of Fauci.
AI can make producing content easier for campaigns. Loge said it will add to the overload of political noise voters already face.
“It’s going to make the goop, which feels like political campaign rhetoric, even goopier,” Loge said. “There’ll be more stuff coming at voters faster and at greater volume.”
Benefits of AI use for political campaigns
Supporters say AI could make campaigns more efficient by streamlining communication among volunteers, staff and others, Loge said.
Running a campaign can be expensive, and Loge said AI could be used by candidates to avoid hiring staff or consultants.
“It lowers the bar to entry,” he said. “It allows more people to participate in politics, which is arguably a good thing.”
Of course, if campaigns are using AI to do work previously done by staff, it could cost human jobs.
“You’re going to be replacing interns and junior staff who used to write press releases and fundraising emails with computer programs that’ll be writing those things,” Loge said.
But humans could be part of the solution to issues presented by AI. As the technology continues to improve, Loge said volunteers and staff will become increasingly important to campaigns.
He said voters will likely be looking to talk with neighbors, volunteers and other people to sort out what’s real or fake.
“This actually makes politics, ironically, more human, not less, because there’d be a greater need for human connection in campaigns than ever before,” he said.