
As artificial intelligence (AI) continues to revolutionize industries, the cybersecurity field faces a double-edged sword of opportunities and threats. StrongDM's latest report, "The State of AI in Cybersecurity," highlights both the growing concerns of cybersecurity professionals and their readiness to tackle AI-driven challenges. Based on a survey of 600 cybersecurity professionals, the report sheds light on pressing issues around AI regulation, perceived threats, defense confidence, and the future of the cybersecurity workforce.

Key Findings from the Survey:

Regulation Concerns: 76% of cybersecurity professionals believe AI should be "heavily regulated" to prevent misuse, underscoring the need for balance between safety and innovation.

AI-Driven Threats: A significant 87% of respondents expressed concerns about AI-driven cyberattacks, with malware (33%) and data breaches (30%) ranking as top threats.

Preparedness Levels: Only 33% of professionals feel "very confident" in their current defenses, and 65% of companies admit they are not fully prepared for AI-powered attacks.

Workforce Impact: Despite challenges, two-thirds of respondents feel optimistic about AI's potential to enhance, rather than replace, jobs in cybersecurity.

top 7 comments
[–] kbal@fedia.io 17 points 1 day ago

The attack we're warned about: Unstoppable AI-powered distributed adaptive quantum breach tools.

The attack that actually happens: Employees accepting bribes from ransomware gangs.

[–] krolden@lemmy.ml 10 points 1 day ago (2 children)

600 cyber security professionals

lol. That's a very small sample of 'professionals', which, by the way, just means they work in 'security' in some capacity. It doesn't mean they're good at it or know what they're talking about.

Why are these articles always crying over a lack of AI regulation instead of the massive data collection operations that are allowed to keep harvesting data to build their models? The only regulation we need is upholding our Fourth Amendment rights. But we all know that's never going to happen.

[–] sugar_in_your_tea@sh.itjust.works 4 points 1 day ago (1 children)

100% agreed. I'll take this a step further and suggest we need a constitutionally recognized right to privacy (to protect people from governments) as well as statutory protection of individual privacy (to protect people from companies). Opt-out privacy violations would become illegal, which should dramatically cut down on the worst of it.

Follow that up with reduced copyright durations and stronger copyright protections, which would increase the amount of legally usable training data while keeping the most recently published work off-limits.

I think this could solve a number of cybersecurity-related issues, especially if the law states that companies are explicitly legally liable for protecting any PII they collect, which is in line with their duty to safeguard the privacy of their customers.

[–] taladar@sh.itjust.works 2 points 1 day ago (1 children)

Honestly, I would just get rid of copyright completely. The right to copy data makes no sense in a modern context. Replace it with rights related to allowed uses of data to be determined by the people who produce the data and the people the data is about.

Replace it with rights related to allowed uses of data to be determined by the people who produce the data

That's essentially what copyright is! It has more to do with who has the legal right to access, modify, and distribute content than with copying per se. You can make as many copies as you want of content you legally have access to; you just can't share it with anyone you aren't permitted to share it with.

I think copyright is a good thing in general because it gives producers of content some rights to protect what they've created, which means they have an opportunity to profit from it before competitors can take that content and redistribute it themselves. If there were no copyright protections, the moment you published something, a competitor would rush to reproduce it and publish it far more broadly, using their much larger distribution network to cut you out of sales.

The problem is that it also restricts modifications, so if someone produces something, you need to be very careful to stay within the constraints of fair use or you could get hit with a lawsuit. And you can be sued even if you've done everything properly, if the original creator has enough money to tie you up in court (see Nintendo's suit against Palworld developer Pocketpair).

people the data is about

This seems incredibly problematic because then you'd have to jump through massive hoops to write a book involving any public figure. Even if you exempt public figures from this, you could still have tons of lawsuits from people trying to take a cut from your profits if anything in the book seems to relate to them.

The main problem, IMO, with copyright is how long it's in effect. The original intent was to protect content so the creator had time to profit from it, but Disney has lobbied in the US to extend the term to 95 years from the date of publication, or 70 years after the death of the original creator, which is unnecessarily long. US copyright originally lasted only 14 years, with an optional 14-year extension (subject to approval), and I think that's much closer to reasonable than the current durations. I'd even go so far as to say it should be more like 5-10 years, with an optional extension that's only granted if the creator can prove they need more time to recuperate costs (perhaps a max of 20 years?).
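To make those durations concrete, here's a minimal illustrative sketch (the function name and simplifications are mine, not from the thread) comparing roughly when a work would enter the public domain under current US terms, the original 1790 Act terms, and the 5-20 year proposal above:

```python
# Illustrative only: approximate US public-domain year under the three
# regimes discussed above. Real copyright rules have many edge cases
# (works for hire, pre-1978 works, renewals, etc.) that are ignored here.

def expiry_year(publish_year, death_year=None, regime="current_us"):
    """Return the approximate year a work enters the public domain."""
    if regime == "current_us":
        # Life + 70 years when the author's death year is known;
        # otherwise 95 years from publication (corporate/anonymous works).
        return death_year + 70 if death_year else publish_year + 95
    if regime == "act_of_1790":
        # Original US term: 14 years plus an optional 14-year renewal.
        return publish_year + 14 + 14
    if regime == "proposed":
        # The commenter's suggestion: 5-10 years, capped at roughly 20
        # with an approved extension; use the 20-year ceiling here.
        return publish_year + 20
    raise ValueError(f"unknown regime: {regime}")

# A corporate work published in 2024:
print(expiry_year(2024))                        # 2119 under current US law
print(expiry_year(2024, regime="act_of_1790"))  # 2052 under the 1790 Act
print(expiry_year(2024, regime="proposed"))     # 2044 under the proposal
```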

[–] nimble@lemmy.blahaj.zone 6 points 1 day ago

How could you realistically regulate AI? Sorry if it's a dumb question, I'm just scrolling through "All".

To me it seems like if you regulated it heavily in one country, it would just be researched and made available in other countries. And bad actors could always abuse models from less-regulated countries, any open-source models, or models they built themselves.

I do agree there needs to be more safety emphasized as part of the innovation cycle; I'm just not optimistic, given that this is a modern-day arms race.