Congress is interested in expanding AI training for federal workers
Terry Gerton: Let’s start with kind of a big level-setting. As we talk about AI, from your perspective, what is sort of the current state of AI competency across the federal workforce?
John Pescatore: Well, I think it’s important for us to understand the current status of what is AI. There’s not even a good definition of what that is. Obviously, everybody’s using a lot of tools today to do internet searches with AI engines, and we hear about it in the press and politicians talk about it. It’s a very mature technology — it’s been around for over 20 years — but it’s really exploded here for a number of reasons. So it’s way overhyped. And if you think about the way bad guys try to fool us into falling for phishing attacks, they try to get a sense of urgency going — it’s got to move, you got to do something quickly — that’s sort of where we are with AI. So from the point of view of the state of competency, there’s not even a definition of what it means to be competent yet. So what we have to point out first is: What does it really mean when we talk about using AI, or buying AI, or protecting ourselves against AI? We’re still sort of in a definitional phase.
John Pescatore: In the federal government, probably two years ago, I did a briefing on Capitol Hill as part of the lunchtime briefings we do for the Senate on various topics. So I’ve been thinking about this for several years. SANS over that time period has put out three training courses focused on the key aspects of AI. So we’re starting to see the demand for that sort of training go up. We know there’s a thirst to become competent. It’s only very recently that you could even define what that means.
Terry Gerton: That’s a fair point. And AI is going to show up differently depending on the kind of work that an individual does in the federal government. But there is this new bill that’s been introduced in the House — the AI Training Extension Act of 2025. So to your last point, as there’s growing interest in training, what is this bill trying to accomplish?
John Pescatore: There were previous draft bills that came out last year — maybe even in 2024 — that were focused on the procurement of AI, and they had to do with training procurement officers and people involved in technology procurements on how to evaluate AI.
John Pescatore: This current draft legislation that just came out in June is aimed at broadening that to train IT people and end users and security people. That’s very key — the procurement side is important — but that wasn’t really focused on the real problems. When we look at it today — already in private industry, certainly in some spots in the federal government — there are kind of three things we have to worry about with AI. One is bad guys using it against us. That gets a lot of press, and we spend a lot of time there. But the most important one is: How do we make sure that if the mission side is rolling out use of AI, it’s done securely? That’s really the number one. And the final one is: How do we — security people and IT people — use AI to more efficiently do our jobs? In a time when it’s harder to hire new people — harder than ever — how do we use AI tools to make our people more productive, to try to fill some of the gaps? But that first one — making sure we’re protecting our own mission and our organization’s use of AI — is really where the most work and training needs to be done.
Terry Gerton: So what are the key aspects of that kind of training? What do people need to be focused on? What do we need to make sure that our federal employees have as basic AI skills?
John Pescatore: So it depends on the role of the person. That’s how we start our training out. We have one that’s a broad one for leaders and managers — what do they need to think about? So for example, governance of AI is very important. Who is in charge of deciding what data the AI engine ingests, and how is that data protected? A great example is Microsoft — this was early last year, over a year ago now — they had a security incident with their own use of AI in Microsoft Azure Cloud that led to a huge breach because they hadn’t thought through the governance of AI.
John Pescatore: So in that example, employees’ PCs — disk drives — were indexed into the AI engine. That included every one of their emails, their passwords. So it was not a failure of the technology. It was a failure of the governance and the definition of how it was to be used. So that’s the first one — governance. That’s the same in information security in general. We have to have governance in place before we can start coming up with policies, before we can implement controls to cause those policies to take place. So that’s number one.
John Pescatore: From the typical IT person’s point of view — someone who might be involved in an IT project — it’s understanding the basic concepts and what it really means. We tend to talk about AI as one blob. There are many different types of AI — machine learning, for example. A lot of what you’re seeing today is generative AI — the chat queries and the ability to create fake pictures and voices. But there are a lot of different uses of AI. And then the final thing after that is the cybersecurity people. There’s been AI in use in cybersecurity tools for a good 20 years. It was called machine learning for a long time. Now we do have some options where security people can take advantage of AI tools to do some things — but they’re not all they’re claimed to be.
Terry Gerton: I’m speaking with John Pescatore. He’s the director of emerging security trends at the SANS Institute. Let’s think about how agencies would acquire this kind of training or do it themselves, as the General Services Administration is thinking about centralizing procurement. Should we think about the government buying a common training package for agencies? Should agencies think about specific training programs for particular skill sets or mission sets? What should they be thinking about as they begin to acquire training for their AI teams?
John Pescatore: I think the way the government’s gone about looking at cybersecurity training and IT training overall in general still holds. And it’s not to start by looking at the training — it’s to start looking at the roles. And that’s typically by defined job categories within the various frameworks, like the NICE framework and other government efforts that have that, where a role is defined and then what skills are needed for that role, and then how are those skills demonstrated and how are those skills acquired. So I might acquire those skills — maybe I worked 10 years in a field and I’ve never taken a training course, but I have those skills. Maybe I’m brand new, right out of college, I have a degree but I’ve never had any hands-on experience. I’m lacking some of those skills, or I may have others.
John Pescatore: So then you get to certification that says: How do we assess what skills the person does have? It’s not just a paper, resume exercise. And then what’s available to fill those skill gaps? And that’s where training comes in. So there might be a degree in computer science — might be able to fill some of it — but we know from many years of experience most computer science programs don’t necessarily teach people how to do things. They teach the concepts of how to talk about things. Then we might say for certain skills — operational skills — they need hands-on experience. That may involve training with lab-type hands-on environments. Others may be strictly concepts. So I think in AI we’ve seen definitely there’s a need for that concept-type training so that managers understand what this means, and similarly that people that are technical managers understand how to evaluate their own staff’s needs. And then there’s a lot of hands-on needs.
John Pescatore: We can look at the medical world, for example. What they saw over the years was, obviously, we need highly skilled doctors. But we also need people to operate MRI machines and CAT scan machines — people who understand the medical side of things, but also understand the technology side of things. Then we need people to evaluate what the technology is saying. And then finally, all of that feeds a small number of experts. The same is true in the federal government for IT and IT security.
Terry Gerton: What I hear you saying is there’s all kinds of different training. What an individual who’s preparing emails and PowerPoints needs in terms of AI training is different from someone who’s managing large databases or deploying cybersecurity. But through all of that, there is a focus on transparency and ethics. Where would you bring those two topics into the training planning structure?
John Pescatore: Well, transparency and ethics I would largely lump under governance, because that’s what you have to think through when a program is going to start up and do something with AI — say, provide better health care to state, local and tribal entities using AI tools — all the things that have been talked about.
John Pescatore: That’s where you get into, from the start: How do we do this and protect any information that’s in there? How do we validate the output from a safety point of view? And then from an ethics point of view: How do we make sure the inputs to this result in ethical outputs? We’ve learned that AI is very good at hallucinating. If one AI engine makes something up and another AI engine ingests it, all of a sudden all the AI engines think it’s true.
John Pescatore: So there are safeguards we’ve learned from a safety point of view. I used to work for the Secret Service, and we used to have people do bomb checks — make sure there weren’t bombs around the properties. And you would take bomb training. The old joke was: Read the manual on how to defuse any bomb, and read it to the end, because it’ll say cut the blue wire after you cut the red wire. It’s the same with AI. If you don’t think end to end about quality and safety, then ethics and transparency are meaningless. And if you think about it from a procurement point of view, transparency means knowing what’s going on inside this thing. It says it’s using AI — what does that mean? We still need definitions at a level of detail that hasn’t been reached yet.
New York Enacts Artificial Intelligence Companion Mental Health Law
Key Takeaways:
- New York is the first state to enact mental health-focused statutory provisions for “AI Companions,” requiring user disclosures and suicide prevention measures for emotionally interactive AI systems.
- Other states are exploring similar approaches, with laws targeting compulsive use, requiring suicide prevention protocols or mandating user awareness of AI-human distinctions.
- Organizations must assess their AI risk to ensure compliance with the myriad laws and statutory provisions governing AI systems.
New York, as part of its state budget process, enacted new statutory provisions for “AI Companions” in May 2025 that highlight an emerging desire to monitor and safeguard the mental health of people who use AI tools and systems. The move reflects a broader regulatory awareness of the mental health risks involved in AI interactions and the desire to safeguard vulnerable AI users, particularly minors and those experiencing mental health crises like suicidal ideation.
An Emerging Desire to Safeguard Mental Health in an AI-Enabled World
Regulators are increasingly aware of the mental health risks involved in AI interactions and are seeking ways to safeguard vulnerable users. These risks were brought into sharp focus by the death of Sewell Setzer, a 14-year-old Florida teenager who died by suicide after forming a romantic and emotional relationship with an AI chatbot and allegedly telling the chatbot he was thinking about suicide. His death has resulted in a closely watched lawsuit over the chatbot’s role.
States have considered a variety of techniques to regulate this space, ranging from user disclosures to safety measures. Utah’s law on mental health chatbots (H.B. 452), for example, imposes advertisement restrictions and requires certain disclosures to ensure users are aware they are interacting with an AI rather than a human being. Other states, like California (via SB 243), are considering design mandates like banning reward systems that encourage compulsive use and requiring suicide prevention measures within any AI chatbots that are being marketed as emotional buddies. Currently, NY is the only state that has enacted safety-focused measures (like suicide prevention) around AI companionship.
NY’s Approach to Embedding Mental Health Safeguards in AI
NY’s new statutory provisions (which go into effect on November 5, 2025) focus on AI systems that retain user information and preferences from prior interactions to engage in human-like conversation with their users.
These systems, termed “AI Companions,” are characterized by their ability to sustain ongoing conversations about personal matters, including topics typically found in friendships or emotionally supportive interactions. That means chatbots, digital wellness tools, mental health apps or even productivity assistants with emotionally aware features could fall within the scope of AI Companions depending on how they interact with users, although interactive AI systems used strictly for customer service, internal operations, research and/or productivity optimization are excluded.
The law seeks to drive consumer awareness and prevent suicide and other forms of self-harm by mandating that such AI systems (1) affirmatively notify users they are not interacting with a human and (2) take measures to prevent self-harm. Operators must provide clear and conspicuous notifications at the start of any interaction (and every three hours during long, ongoing interactions) to ensure users are aware they are not interacting with a human. Operators must also ensure the AI system has reasonable protocols to detect suicidal ideation or expressions of self-harm by a user and to refer the user to crisis service providers, such as the 988 Suicide Prevention and Behavioral Health Crisis Hotline, whenever such expressions are detected.
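For teams building or reviewing such systems, the following is a minimal, hypothetical sketch of how those two obligations might be wired into a conversation loop. Everything here is an illustrative assumption rather than statutory language or any vendor's API: the class and function names are invented, and the keyword-based detector is only a stand-in for the vetted detection protocol a real deployment would need.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical constants; the wording and names are illustrative, not statutory text.
AI_DISCLOSURE = "Reminder: you are interacting with an AI, not a human being."
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you can call or text 988 "
    "to reach crisis support."
)
DISCLOSURE_INTERVAL = timedelta(hours=3)


def detect_self_harm_signal(message: str) -> bool:
    """Placeholder detector; a real system would use a vetted clinical protocol."""
    keywords = ("kill myself", "end my life", "want to die", "hurt myself")
    return any(k in message.lower() for k in keywords)


def generate_companion_reply(message: str) -> str:
    """Stub for the underlying conversational model."""
    return "(model reply to: " + message + ")"


class CompanionSession:
    """Wraps a conversation so every turn passes through the two compliance checks."""

    def __init__(self) -> None:
        self.last_disclosure: Optional[datetime] = None

    def handle_turn(self, user_message: str, now: Optional[datetime] = None) -> list:
        now = now or datetime.now()
        outputs = []

        # 1) Clear and conspicuous AI disclosure at the start of the interaction,
        #    and again every three hours during long, ongoing interactions.
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            outputs.append(AI_DISCLOSURE)
            self.last_disclosure = now

        # 2) Detect expressions of suicidal ideation or self-harm and refer the
        #    user to crisis services such as the 988 hotline.
        if detect_self_harm_signal(user_message):
            outputs.append(CRISIS_REFERRAL)

        outputs.append(generate_companion_reply(user_message))
        return outputs


# Example: the first turn carries the disclosure; a flagged turn adds the referral.
session = CompanionSession()
print(session.handle_turn("Hi, how was your day?"))
print(session.handle_turn("I feel like I want to die."))
```

In practice, the disclosure wording, the cadence for ongoing sessions and what counts as a "reasonable" detection protocol are compliance questions to resolve with counsel; the sketch only shows where those controls sit relative to the underlying model call.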
Assessing AI Regulatory Risk
Whether in the context of chatbots, wellness apps, education platforms or AI-driven social tools, regulators are increasingly focused on systems that engage deeply with users. Because these systems may be uniquely positioned to detect warning signs like expressions of hopelessness, isolation or suicidal ideation, it’s likely that other states will follow NY in requiring certain AI systems to identify, respond to or otherwise escalate signals of mental health distress to protect vulnerable populations like minors.
NY’s new AI-related mental health provisions also showcase how U.S. laws and statutory provisions around AI heavily focus on how the technology is being used. In other words, your use case determines your risk. To effectively navigate the patchwork of AI-related laws and statutory provisions in the U.S. — of which there are over 100 state laws currently — organizations must evaluate each AI use case to identify their compliance risks and obligations.
Polsinelli offers an AI risk assessment that enables organizations to do exactly that. Understanding your AI risks is your first line of defense — and a powerful business enabler. Let us help you evaluate whether your AI use case falls within use case or industry-specific laws like NY’s “AI Companion” law or industry-agnostic ones like Colorado’s AI Act, so you can deploy innovative business tools and solutions with confidence.
New cybersecurity, artificial intelligence degrees at Upper Iowa University can help protect, grow businesses
FAYETTE, Iowa (KCRG) – Five new degree programs at Upper Iowa University will help students get trained in artificial intelligence, cybersecurity, and business analytics.
While AI and cybersecurity may be foreign to Main Street business owners, Dubuque County IT Director Nathan Gilmore says investing in the fields is critical.
“It only takes one breach or it only takes one incident to potentially make them close up shop or wipe out years of profits,” says Gilmore.
Data breaches occur every 39 seconds, according to cybersecurity company SentinelOne.
“It is, at this point, in my opinion, no different than an electric bill or a water bill,” explains Gilmore. “It’s just part of doing business.”
While investments in cybersecurity can help protect companies, Gilmore says artificial intelligence can help business owners save time and money. AI can automate tasks such as billing, scheduling appointments and answering questions for customers online.
“It’s automated. You’re not using actual staff time. Those are the sorts of force multipliers you can use AI in a very positive fashion,” says Gilmore.
Gilmore says more trained workers are needed in both growing fields to address demand, and new degree programs at Upper Iowa University launching this fall will help fill the need.
“It was kind of a no-brainer for us,” shares Dr. Billie Cowley, Vice President for Academic Affairs at Upper Iowa University.
This fall, UIU in Fayette is launching the following:
- Bachelor of Science in Cybersecurity
- Bachelor of Science in Business Analytics
- Master of Business Administration, Cybersecurity
- Master of Business Administration, Artificial Intelligence (AI)
- Master of Public Administration, Cybersecurity
“It’s extremely exciting. There will be a pool of knowledgeable, trained people who will be able to serve this Upper Midwest region,” says Gilmore. “Yes, a lot of this stuff can be done remotely. No question, it can be, but Main Street is also a very face-to-face type world. They want to talk to somebody.”
Cowley says she’s seen firsthand the rate at which AI is evolving.
“We’ve done some AI training with faculty, and what we learn in the fall is now different than what we know in the spring,” says Cowley.
Cowley says the programs are designed to shift as these fields evolve.
“That is massive because this is not a static industry,” says Gilmore. “If those programs are set up to incorporate the changes that are here and coming, that is a massive boon for these programs because this stuff is changing monthly.”
Cowley says there’s no limit to the number of students Upper Iowa will enroll in these programs. Instead, enrollment will be based on demand, and staff will be hired, as needed.
“Upper Iowa is like home to me and my husband,” shares Cowley. “To be a part of it and see this growth, there’s no words to describe what this means.”
More information about UIU’s new offerings can be found at GO.UIU.EDU/FutureReady.
Before winning reelection bid, DC Council member Trayon White apparently used AI political ad
Before winning back his D.C. Council seat in a special election this week, Trayon White posted a video rallying voters that viewers quickly sniffed out as artificial intelligence.
The video shows a figure whose appearance and voice are robotic. The AI company’s logo appears in the bottom right corner of the screen in the Instagram post, which was shared Monday, the last day of early voting.
“They hope we stay home,” the female voice said as it urged voters to head to the polls. “They hope we stay silent, but the truth is, no one is coming to save us but us. We have the power to shape the now and it’s time we use it.”
White won reelection Tuesday.
Ward 6 Council member Charles Allen recently reintroduced legislation to regulate campaign advertisements like White’s recent video.
“There was no disclosure or transparency in that ad,” Allen told WTOP’s Jessica Kronzer. “And that’s what we’ve seen on a lot of AI generated ads.”
The legislation was brought forward days before White’s post; Allen said it would require advertisements generated by AI to have a label. Such ads would be banned altogether 90 days before an election.
“It’s perfectly OK for campaigns and for candidates to contrast themselves as much as they want with other people on the ballot, but we do expect people to tell the truth about who is speaking, and this just helps make sure that happens,” Allen said.
White’s video was first flagged on X by 51st News journalist Martin Austermuhle, who posted a video using the same avatar that appeared in the campaign’s advertisement.
But White is far from being alone in using generative artificial intelligence for political purposes.
Generative AI is already being used by campaigns to write fundraising emails, draft speeches and in some instances, avatars are even making calls to voters.
“The AI-in-politics toothpaste isn’t just out of the tube,” said Peter Loge, the director of the School of Media and Public Affairs at the George Washington University. “It’s going to happy hour and taking selfies with the candidates.”
WTOP has reached out to White for comment.
Can legislation regulate campaigns’ use of AI?
Other states have passed measures aimed at making generative AI use in political campaigns more transparent through disclaimers or banning it altogether.
Allen is hoping Maryland and Virginia will adopt similar legislation to his proposal.
But Loge said regulating the practice is a challenge.
“Laws aren’t a bad thing. Regulations aren’t a bad thing, but they have to be enforceable,” he said. “They can’t be easily skirted. And what political campaign professionals have proven again and again is they can skirt almost anything.”
Costs of AI use for political campaigns
Loge has been studying AI for years and teaches courses on political communication ethics. He said some of the ethical issues presented by AI are age-old.
“People didn’t start lying in politics with the introduction of AI. It’s not as if politics in America was puppies and rainbows until social media came along and suddenly the wheels came off,” Loge said. “What AI does is allow us to do what we’ve always done, but louder, faster, with greater impact.”
He gave an example of the 19th-century artists of the Hudson River School, who painted the great American landscapes that later inspired the national parks and Western expansion. Historians believe those painters exaggerated what they saw in their artwork.
“We’ve had deepfakes in oils since the 1800s … generative AI makes it easier and faster to do that,” Loge said. “That’s arguably a bad thing.”
In 2023, during the race for the Republican Party’s presidential nomination, Florida Gov. Ron DeSantis’ campaign shared an apparently fake image of Donald Trump hugging Anthony Fauci. The campaign was criticizing Trump’s alleged support of Fauci.
AI can make producing content easier for campaigns. Loge said it will add to the overload of political noise voters already face.
“It’s going to make the goop, which feels like political campaign rhetoric, even goopier,” Loge said. “There’ll be more stuff coming at voters faster and at greater volume.”
Benefits of AI use for political campaigns
Supporters of AI say it could be used to make campaigns more efficient by streamlining communication between volunteers, staff and others, Loge said.
Running a campaign can be expensive, and Loge said AI could be used by candidates to avoid hiring staff or consultants.
“It lowers the bar to entry,” he said. “It allows more people to participate in politics, which is arguably a good thing.”
Of course, if campaigns are using AI to do work previously done by staff, it could cost human jobs.
“You’re going to be replacing interns and junior staff who used to write press releases and fundraising emails with computer programs that’ll be writing those things,” Loge said.
But humans could be part of the solution to issues presented by AI. As the technology continues to improve, Loge said volunteers and staff will become increasingly important to campaigns.
He said voters will likely be looking to talk with neighbors, volunteers and other people to sort out what’s real or fake.
“This actually makes politics, ironically, more human, not less, because there’d be a greater need for human connection in campaigns than ever before,” he said.