Banks find AML "Ineffective" because the premise is erroneous. KYC is a sham because bankers don't actually know their customers and checking that you have a valid ID before you can open an account has no value. Criminals, or their straw men, have a valid ID.
To detect crime you need detectives to investigate crimes. The transaction record by itself means nothing because the banks have no way to know what any of the transactions are actually paying for. But if you independently know that someone is buying cocaine or selling state secrets then you don't need their financial records to arrest them.
Digging into anybody's Facebook isn't going to change that because posting about your criminal activities on a public forum is already how dumb criminals get arrested. This is nothing but a data grab by the banks.
I agree with you. I think the only solution is for politicians to stop being soft on crime by funding police, prosecutors, courts, prisons, and rehabilitation services (rehab for both criminals and addicts). In addition, the various tax authorities need to be empowered and funded to investigate tax fraud of all kinds, and people who are caught need to be sent to prison rather than allowed to buy their way out of a sentence.
Most of that stuff isn't even related to this. Corporations and rich people don't avoid taxes by lying to banks, they avoid taxes by hiring tax lawyers or lobbying the government. The best way to put the drug cartels out of business is to let Pfizer and Merck beat them on quality and undercut them on price and then use the tax money to fund rehab and education programs.
The actual crimes this is supposed to prevent aren't all that common, and to the extent that they aren't prosecuted it's largely because the perpetrators have political power -- or are themselves nation states -- not because we don't have enough prosecutors to do it. Especially if we would actually put the drug cartels out of business in the aforementioned way.
We certainly don't need more people in prisons -- largest prison population in the world, how's it working?
I suspect that this is more about detecting stuff like scammers and laundering, not people buying drugs. Unless we start drone-striking Indian call centers or fully blockading Russia, the only solution is to work on international relations so that these crimes can be adequately investigated.
Even if a piece of information isn't informative on an individual basis, it might improve the performance of a model that combines it with other inputs.
You don't need to post about doing crimes for your FB profile to be useful for risk assessment. Your connections or other posts (e.g. about how some emergency means you're suddenly short of cash) could contribute to an elevated score.
The proposal doesn't go into much detail about how they imagine it will work, but it sounds like they're just saying that a statistical model (based on a larger set of data) would be more useful than a set of rules based on a tiny set of inputs. This seems entirely reasonable.
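To make that concrete, here's a minimal sketch of how individually weak signals can still move a combined score. The feature names and weights are entirely hypothetical, hand-picked for illustration; nothing here is taken from the proposal:

```python
import math

# Hypothetical features and hand-picked weights, purely for illustration.
WEIGHTS = {
    "new_account": 0.4,          # weak signal on its own
    "high_cash_deposits": 0.9,   # weak signal on its own
    "ip_country_mismatch": 0.7,  # weak signal on its own
}
BIAS = -3.0  # baseline: most customers score low

def risk_score(features):
    """Combine weak binary signals into a single probability-like score."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if features.get(name))
    return 1 / (1 + math.exp(-z))  # logistic squashing into (0, 1)

# Any single signal barely moves the score; all three together stand out.
print(risk_score({"new_account": True}))                      # ~0.07
print(risk_score({"new_account": True,
                  "high_cash_deposits": True,
                  "ip_country_mismatch": True}))               # ~0.27
```

Whether banks should be doing this is a separate question; the point is only that "not informative on its own" doesn't mean "not informative in combination".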
Here is the relevant paragraph:
> Input data for MSA platforms should incorporate not only transactional data, customer static data and internal reference lists, but also other dynamic behavioural customer information where proportionate to the risk (e.g. device ID, IP addresses). Using an RBA, input data may also include data from reputable external, publicly available sources, including information on company structures, Ultimate Beneficial Owners (UBO), and watch lists, as well as complementary sources such as market data and verified customer social media accounts. Finally, FIs should establish robust data governance and quality control frameworks.
Kind of true, yes, in theory, because FATF is largely controlled by the US and has enforced US-style rules on the rest of the world.
However, on a practical level AML/KYC works differently outside of the US. The regulator creates well-defined criteria for banks to execute: to onboard a customer, the bank must do this and this, and check that field against that record.
In the US, regulators just washed their hands and placed the burden on banks: banks must make all "reasonable" efforts to "know" their customer, and what is deemed "reasonable" varies across banks, branches and even individual clerks.
> Your connections or other posts (e.g. about how some emergency means you're suddenly short of cash) could contribute to an elevated score.
This sounds more like a way to send desperate people into a death spiral by raising interest rates or denying credit availability right at the worst possible moment than any kind of system that would prevent crime.
> The proposal doesn't go into much detail about how they imagine it will work, but it sounds like they're just saying that a statistical model (based on a larger set of data) would be more useful than a set of rules based on a tiny set of inputs. This seems entirely reasonable.
It's implying they want to invade everyone's privacy to set up a hallucinatory black box that destroys lives at random under the guise of... I'm not even sure what this is supposed to do. It's not going to yield anything that would be useful in a prosecution so it's more like opaque government-facilitated denial of service without due process. Far from reasonable, it's total crap.
> opaque government-facilitated denial of service without due process
That's not what the report is about.
Banks and financial institutions are required to file Suspicious Activity Reports (SARs) and Suspicious Transaction Reports (STRs). Filing such a report doesn't stop a transaction from happening.
The report suggests that banks think they could improve the usefulness of these reports and reduce the number of false positives, if they had more freedom to choose the criteria.
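For a rough sense of why false positives are the sticking point, here's a back-of-the-envelope calculation. All numbers are made up for illustration; none of them come from the report:

```python
# Illustrative numbers only -- not figures from the report.
transactions = 10_000_000       # transactions screened in some period
illicit_rate = 0.0005           # assume 0.05% are actually illicit
sensitivity = 0.90              # the filter catches 90% of illicit transactions
false_positive_rate = 0.02      # and wrongly flags 2% of legitimate ones

illicit = transactions * illicit_rate
legit = transactions - illicit
true_alerts = illicit * sensitivity
false_alerts = legit * false_positive_rate

print(true_alerts, false_alerts)                    # 4500.0 199900.0
print(true_alerts / (true_alerts + false_alerts))   # ~0.022: only ~2% of alerts are real
```

With a low base rate, even a decent filter buries investigators in false positives, which is why the criteria matter so much to the banks filing these reports.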
> Banks and financial institutions are required to file Suspicious Activity Reports (SARs) and Suspicious Transaction Reports (STRs). Filing such a report doesn't stop a transaction from happening.
It kind of does though, because banks are also not allowed to do business with criminals, so if they suspect you of being engaged in criminal activity -- even if they have no real proof -- guess what they do.
Worse than that, they're prohibited from telling you about the SARs, so you have no way to dispute them if they're inaccurate. It's a Kafkaesque absurdity that should simply be abolished rather than attempting to polish the turd with AI pixie dust.
> The report suggests that banks think they could improve the usefulness of these reports and reduce the number of false positives, if they had more freedom to choose the criteria.
Which is bollocks, but the government keeps pressuring them to do something because the system doesn't work (because banks shouldn't be doing it to begin with), so they're proposing to invade everyone's privacy because they want to do it anyway and this is a convenient excuse.
It feels like we're gradually slipping into a dystopian nightmare world, predicted again and again by sci fi writers for decades, with no real strong opposition. There's the EFF, I guess? But advocacy groups don't seem particularly powerful or influential. I can't even hope for a "it'll get worse before it gets better" scenario. A world with normalized lack of privacy seems unlikely to suddenly care about it again. Perhaps we've been left behind by the cultural tide, and our values will die with us.
The relevance of advocacy groups is at an all-time low because of just how little the professionally informed care these days. Tech was a "community" when the grifters slid in and wanted something from the rest of us, and YOLO fuck-you-I-got-mine backstabbery when anyone suggested maybe not selling out our privacy quite so hard.
And it infects the rest of us. When our friends and colleagues have jobs swinging pickaxes at our societal expectations and standards of privacy, why do we pretend they're morally neutral in the whole thing?
It's already very difficult to access banking services if you don't carry a smartphone. I have a very inactive social media presence and I am working towards reducing my dependence on smartphones; at this point my most viable option is to move to another country that hasn't "modernised" its banking infra.
> It's already very difficult to access banking services if you don't carry a smartphone.
I think it depends. I never use my smartphone for banking purposes and it doesn't make anything difficult for me at all. But I'm in the US and I know there are other nations where this isn't at all true.
As someone who hasn't had social media for several years, unless you count this site for some reason, this is mildly concerning. Do I just have to make a burner Facebook account so my bank will let me access my own money?
Before you throw stones: I knew it was a mistake going in, but I needed a job desperately and I figured I would sabotage them from the inside.
Imagine my surprise when I realised I didn't really need to sabotage them - they're already doing it on their own.
Every 6 months we got mandatory AML training - watching a couple of videos and answering some fairly basic questions. Every time, the same videos and the same questions - I had colleagues who simply wrote down the correct answers and completed it in 10 minutes.
I had colleagues who were actively helping (low-level) drug dealers - taking their cash, depositing it into their own private accounts and then transferring the funds to some other 3rd party (fairly low amounts, under 10k).
BaFin (the German banking supervisory authority) is useless (see Wirecard, Solaris, N26) - I personally made reports about another financial institution - got a boilerplate reply after a year that they'll look into it and nothing since then (this was more than a year ago).
I left the banking and fintech world and will never go back (unless I have a real possibility to sabotage them from the inside).
The cherry on top: I sold my car to a private individual and they paid me cash. We did, of course, sign a contract (the standard ADAC one), and the next day I went to my bank (Postbank, part of DB) to deposit the funds (around 15k).
The clerk was on the verge of calling the police because I had so much money on me. Even showing her the contract and explaining that I worked at DB and knew what the AML regulations are made no difference. In the end I had to deposit 9,900 in my account and the rest in my wife's account. (You can't withdraw more than 10k, but there is no restriction on deposits; you just need to show ID.)
I'm in that same boat. My guess is it's going to end up being just like having no credit history used to be. The advice there is to open a credit card, use it a little bit every month, and pay it off entirely to build up your credit. They will recommend creating social media accounts and posting a little bit regularly so that your social media credit score can build.
I'm afraid we are building ourselves a tech dystopia, not the bright future most of us imagined.
> They will recommend creating social media accounts and posting a little bit regularly so that your social media credit score can build.
Hey, now there's a great use for genAI. If we decline to the point where social media use is basically required, I'd totally just set up an LLM to post made-up stuff automatically for me so I won't have to bother doing it myself.
They are not talking about that; they are trying to catch money laundering, trafficking, etc. And using that as an excuse to suck up everyone's data for marketing.
When you read the actual statement it's reporting on, and ctrl+f "social", you find that it is used exactly once.
> Input data for MSA platforms should incorporate not only transactional data, customer static data and internal reference lists, but also other dynamic behavioural customer information where proportionate to the risk (e.g. device ID, IP addresses). Using an RBA, input data may also include data from reputable external, publicly available sources, including information on company structures, Ultimate Beneficial Owners (UBO), and watch lists, as well as complementary sources such as market data and verified customer social media accounts. Finally, FIs should establish robust data governance and quality control frameworks.
In other words, do a thorough investigation based on all of these inputs. If you're looking at potential money laundering through a business, go to Companies House and learn as much as you can about the business to determine how legit it is. When you're looking at a potentially fraudulent customer, look at their digital footprint online, including social media, to determine how legit they are.
This is standard practice in banks today. What the statement suggests is that in addition to monitoring individual transactions, banks should periodically monitor customers as well.
The statement doesn't call for "access", nor does it even use the word "access" in the main text. It's simply clickbait, which people reading the headline + comments alone have fallen for.
When I moved to the UK, I was advised to open a bank account with either Monzo or Starling, which are the two online-only startup banks in the UK. (They're actual banks, not resellers backed by other banks, or money transfer apps like CashApp.) This is because their identity checks involved scanning a single identity document and sending a video of myself saying that I wanted to open a bank account. All the traditional 'High Street' banks wanted all sorts of documentation - Council Tax bills, infamously - that I simply didn't have, because I'd just arrived in the country.
I know why all this is necessary. London remains the financial capital of the world, and launders a lot of dirty money. Monzo in particular has strict limits on cash withdrawals and deposits, because their accounts are often used in the illicit drug trade.
I have no objection to AML and KYC regulations, but it's absolutely crucial that legitimate people who don't fit the usual patterns - people like recent immigrants - have access to the banking system.
The social media aspect of TFA is attention-grabbing, but I think it's a red herring. The more concerning aspect is the suggestion of AI. To what extent will those algorithms be used, and to what supervision will they be subject?
In the 1980s, my father was an 'Automation Engineer' in a bank. He built an 'artificial intelligence system' for assessing home loan approvals. It was one of those fill-out-a-form-and-we'll-tell-you-how-much-you-can-borrow things that are all over the Internet now - entirely deterministic and transparent.
That's how banking needs to work, at least for individuals. Determinism engenders trust.
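For example, the whole decision can be something as simple as the sketch below. The thresholds and formula are illustrative, not the actual rules his system used; the point is that every output can be traced back to a specific line:

```python
def max_loan(annual_income: float, annual_debt_payments: float,
             interest_rate: float = 0.06, years: int = 30) -> float:
    """Deterministic borrowing-capacity rule (illustrative thresholds):
    cap repayments at 30% of income, subtract existing obligations,
    then convert the affordable payment into a loan principal."""
    affordable_payment = 0.30 * annual_income - annual_debt_payments
    if affordable_payment <= 0:
        return 0.0
    # Present value of an annuity: the same inputs always give the same answer.
    r, n = interest_rate, years
    return affordable_payment * (1 - (1 + r) ** -n) / r

print(round(max_loan(80_000, 6_000)))  # about 247767 -- reproducible and explainable
```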
Modern AIs, on the other hand, are algorithms too complex to be explained. If you've got those controlling people's access to finance - not just loans, but money they've deposited in their own bank account - we have a serious problem.
'Cash is King' is not a solution. Before my Dad worked at the bank, he worked in the payroll department at a soap company. He had to drive to the factory with the paypackets - one for each employee, containing their wages in cash - each week, and hand them out. That's utterly ridiculous.
> Monzo in particular has strict limits on cash withdrawals and deposits, because their accounts are often used in the illicit drug trade.
ATM withdrawals often carry fees that the customer's bank absorbs - for example, most ATMs have an upsell at the end to "check your balance", which your bank pays for. Card payments, on the other hand, generate a small amount of revenue for the bank, and most payments in the UK are card payments or bank transfers.
I'll have to say you're mistaken. Monzo doesn't have limits on cash withdrawal as an AML control.
I have long believed that, with regard to privacy/anonymity, maintaining a token "acceptable" social media presence is much more powerful than abstaining. The two suspicious things are overuse/misuse of social media and no social media at all.
It's like having a honeypot of easily found money in your home (barely hidden in a sock drawer, etc) so that a potential thief will be satisfied with the find and not dig deeper.
Stories like this only solidify the power of maintaining a controlled social media presence.
China's social credit system requires actual recorded offenses to trigger penalties, in particular delinquency in paying fines or tax, not some bank's AI inhaling and rating your Facebook account history.
The "Social" in the name is nothing to do with social media.
> However, the Group believes strongly that the explicit focus on the provision of more highly useful information to relevant government agencies, and feedback from them on the information provided, will yield dividends in the form of more effective measures being taken against criminals and their illicit activity.
So they want to funnel all of your transactions to your government and then let it judge you without any due process. I don't see how that's any good.
If you’re still using social media after e.g. the 2016 Cambridge Analytica scandal, the Twitter Files, etc, you kinda deserve this. You’ve been warned multiple times.
Is this really worthwhile? We already see huge bot networks on social media that effectively hide from automated scanning with fake activity, made even easier these days with genAI. The only way to combat this is for social media to have KYC (which, at this point, I support), but that just offloads the problem onto social media.
When large corporations like banks say they want "access to social media," what they mean is they would love to get their hands on something like this:
Indeed, I'd be surprised if something like this doesn't already exist. This article almost reads like a press release for a product that has already been developed. What it needs is an air of legitimacy, hence the appeal to a public-private partnership.
I immediately thought that they'd also start selling the newly acquired data in some form, since nowadays it seems that everybody wants to be in the spying business.
My bank has recently started asking me, from time to time, in its banking app to grant permission to collect additional data. Thank god for GDPR... IMO, banks should not overcomplicate things and should keep things simple for users.
That link basically says a unified Social Credit system doesn't actually exist as imagined, and that the heavily covered Zhima Credit system basically just didn't work.
Though I'm sure there's no shortage of AI snake-oil vendors who promise banks the world over that they can do it but it "just needs more data to train on, 6 months tops, trust me bro".