Maybe the two public data points weren't connected before?
I don't use SoundCloud, but if profiles didn't have contact information like Email Address on them then it could be meaningful to now connect those two dots.
Like, 'Hey look, Person A, who is known to use email address X, kept Lost Prophets as one of their liked artists even after 2013!'
SoundCloud is a weird place, and people in entertainment have certain strong incentives. Someone figured out who I am, figured out all the email addresses I have, hijacked the email account attached to my SoundCloud, and stole my account. To this day I don't know how they pwned my email: 2FA was on but it never triggered, so they logged in without tripping it, and I have no clue how they got the password either; it's secure enough that brute force isn't realistic, and it's not in any pwned database. Based on what was in my SoundCloud inbox when I got access back, someone paid a fair amount to have this done... and now I have to go change my email again, I suppose.
You are 100% correct based on the article. Not good that you're grayed out, while your parent's "who cares, it was already available and scraped" is the top comment.
But, why care? (Yes, we can “care” that there was a leak - but… why worry? what new risk exists today that didn’t yesterday?)
The data in the leak (other than follower count, etc) was already available for purchase from Zoominfo, 8sense, or a variety of other data brokers or other legal marketplaces for PII.
I suppose the risk now is that the data is freely available and no longer behind a data broker’s paywall?
Let's say you have a $SOCIETAL_TABOO streak and let it out via a SoundCloud account that isn't identifiable as you without your email.
Now it is.
Now I can blackmail you or haunt you.
(I'm sure there are other examples; tl;dr: people get deanonymized, and there are countless reasons why people choose anonymity.)
> Ingress being disabled doesn’t really net you all that much nowadays when it comes to restricting malware.
But how much of this is because ingress is typically disabled, so ingress attacks are less valuable relative to exploiting humans in the loop to install something that ends up using egress as part of its function?
Since we're talking about programs that are trying to set up a connection no matter what, I'm going to say "not much". It's not significantly shrinking the attack surface and forcing attackers onto a plan B that's meaningfully harder to do. It just adds this layer of awkwardness to everything, and attackers shrug and adapt.
You block inbound to block inbound. Of course it doesn’t do anything for outbound. Acting like you can just turn inbound filtering off because of that is disingenuous.
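To make the inbound/outbound distinction concrete, here is a minimal sketch of the typical posture being discussed, in nftables syntax (table and chain names are just the conventional ones; this is illustrative, not a hardened config):

```
# Default-deny unsolicited inbound; return traffic and loopback allowed.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
  }
  chain output {
    # Egress left wide open, which is exactly why malware that
    # phones home is unaffected by the inbound policy above.
    type filter hook output priority 0; policy accept;
  }
}
```

Nothing in the `output` chain inspects what a locally installed program connects to, which is the grandparent's point: blocking inbound was never meant to restrict that.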
> In my opinion most of the people who refuse to believe AI can help them while working with software are just incurious/archetypal late adopters.
The biggest blocker I see to having AI help us be more productive is that it transforms how the day to day operations work.
Right now there is some balance in the pipeline: receiving change requests/enhancements, documenting them, estimating implementation time, analyzing costs and benefits, breaking each feature into discrete stories, having the teams review the stories and 'vote' on a point sizing, planning when each feature should be completed given the team's current capacity, committing to the releases (PI Planning), and then actually implementing the changes being requested.
However if I can take a code base and enter in a high level feature request from the stakeholders and then hold hands with Kiro to produce a functioning implementation in a day, then the majority of those steps above are just wasting time. Spending a few hundred man-hours to prepare for work that takes a few hundred man-hours might be reasonable, but doing that same prep work for a task that takes 8 man-hours isn't.
And we can't shift to that faster workflow without significant changes to the entire software pipeline. The entire PMO team dedicated to reporting on when things will be done changes if that 'thing' is done before the report to the PMO lead is even finished. Or we'd need significantly more resources dedicated to planning enhancements so that we could have an actual backlog of work for the developers. But my company appears interested neither in shrinking the PMO team nor in expanding the intake staff.
Life doesn't have down time. Should we avoid learning new things because no one is paying us to learn?
One of my favorite uses of AI is to quickly make some simple 'hello world' level application that I can run using a given technology.
Don't know what an MCP server is? Boot up Kiro, tell it you want to make a sample MCP server, and ask it for suggestions on what the server should do. A relatively short while later (with a lot of that time spent letting the AI do its thing), you can have an MCP server running on your computer, plus an AI waiting for you to ask why the server does x, y, or z, or how you can get it to do a, b, or c.
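For a sense of what that 'hello world' ends up looking like under the hood: MCP is JSON-RPC 2.0 over stdio, and a sketch of the request/response shape fits in a few lines. This is a deliberately stripped-down illustration (the method names follow the protocol, but a real server would use the official SDK, which handles the handshake and framing for you; the `echo` tool here is just a made-up example):

```python
import json

# One hypothetical tool, advertised via tools/list.
TOOLS = [{"name": "echo", "description": "Echo back the input text"}]

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to a response dict."""
    method = request.get("method")
    if method == "initialize":
        result = {"serverInfo": {"name": "hello-mcp", "version": "0.1"}}
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        # Echo the caller's text back as MCP-style content.
        text = request["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

# A real server would loop here, reading JSON-RPC messages from stdin
# and writing responses to stdout via json.dumps(handle(...)).
```

Having something this small actually running is what makes the follow-up questions ("why does the client call tools/list first?") easy to explore interactively.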
As someone who learns a lot better from doing or seeing vs reading specs, this has been monumentally more efficient than searching the web for a good blog post explaining the concept.
And when I'm doing these learning exercises, I naturally lean towards the domain my company is in because it's easier to visualize how a concept could be implemented into a workflow when I understand the current pain points of that workflow.
I'm not (generally) going home and pulling in stories from my board and working on them; I'm teaching myself new concepts in a way that also positions me to contribute better to my employer.
Not necessarily AI with 'hard guidelines' built in, but AI tools that pass output to a filter with 'hard guidelines' are definitely feasible.
Take the input as normal, pass it into Sora 2 and execute it as you would, pass the output through a filtering process that adheres to hard guidelines.
Of course, when talking about images, what is a 'hard guideline' here? Do you take the output, pass it through AI to identify whether x, y, or z categories of content are present, and then reject it?
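The generate-then-filter pipeline described above can be sketched in a few lines. Everything here is a stand-in: `generate` is whatever model you're wrapping (Sora 2 in the example), and `classify` would itself be a model scoring the output against each hard-guideline category, not the toy substring check used here:

```python
# Example hard-guideline categories; purely illustrative.
BLOCKED_CATEGORIES = {"gore", "weapons"}

def classify(output):
    """Stand-in classifier: return the set of blocked categories the
    output hits. In practice this is a separate moderation model run
    on the generated image/video, not a substring match."""
    return {cat for cat in BLOCKED_CATEGORIES if cat in output}

def filtered_generate(prompt, generate):
    """Generate as normal, then enforce the hard guideline as a
    post-filter: reject the output if any blocked category is hit."""
    output = generate(prompt)
    if classify(output):
        return None  # hard rejection; could also log or regenerate
    return output
```

The design point is that the guideline lives outside the generator, so it stays 'hard' regardless of how the prompt tries to steer the model; the open question in the comment, what the classifier should actually detect, is exactly the part this sketch leaves as a stub.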