To me it's strange that for such an important document they didn't print it and scan it with a scanner (that way it's physically impossible for metadata or anything else that isn't on the printed piece of paper to end up in what is released).
AI is a broad term going back to 1955. It covers many different techniques, algorithms, and topics. The first AI chess programs (Deep Blue et al.) used tree-search algorithms like alpha-beta pruning that are/were classified as AI techniques.
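As a rough illustration (a toy sketch, not Deep Blue's actual code), alpha-beta is just minimax that stops exploring branches the opposing player would never allow:

    def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        # Leaves are numeric evaluations; internal nodes are lists of child nodes.
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:  # prune: the minimizing player will never allow this branch
                    break
            return value
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # prune
                break
        return value

    # Toy two-ply game tree (the textbook example); the root value is 3.
    print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))  # -> 3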
Machine translation is a research topic in AI because translating from one language to another is something humans are good at but computers traditionally are not.
More recently, the machine learning (ML) branch of AI has become synonymous with AI as have the various image models and LLMs built on different ML architectures.
Why would they be a security feature? They are not; the article even says so. Even if UUIDv4s are random, nobody guarantees that they are generated with a cryptographically secure random number generator, and in fact most implementations don't use one!
The reason you use UUIDs in a lot of contexts is that you have a distributed system where you want the client to decide the ID, which is then stored in multiple systems that don't communicate with each other. That is surely a valid scenario for random UUIDs.
To me the rule is: use a UUID as the customer-facing ID for things that need an identity (e.g. a user, an order, etc.) and expose it publicly through APIs; use integer IDs as internal identifiers that create the relations between entities, and keep those internal IDs private. That way the more efficient numeric IDs stay inside the database and are used for joining data, while the UUID is only used for accessing the object from an API (for example); internally, when joining (where you have to deal with a lot of rows), you can use the more efficient numeric ID.
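To make it concrete, here's a rough sketch of that split (sqlite3 only so it runs anywhere; the table and column names are made up, and in Postgres you'd use a native uuid column instead of TEXT):

    import sqlite3, uuid

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- internal integer PK: compact, used for joins, never leaves the database
        CREATE TABLE users (
            id        INTEGER PRIMARY KEY,
            public_id TEXT NOT NULL UNIQUE,  -- UUID exposed through the API
            name      TEXT NOT NULL
        );
        CREATE TABLE orders (
            id        INTEGER PRIMARY KEY,
            public_id TEXT NOT NULL UNIQUE,
            user_id   INTEGER NOT NULL REFERENCES users(id)  -- relation uses the int key
        );
    """)

    user_uuid = str(uuid.uuid4())
    cur = conn.execute("INSERT INTO users (public_id, name) VALUES (?, ?)",
                       (user_uuid, "alice"))
    conn.execute("INSERT INTO orders (public_id, user_id) VALUES (?, ?)",
                 (str(uuid.uuid4()), cur.lastrowid))

    # The API looks the user up by UUID once; the join itself runs on integer keys.
    row = conn.execute("""
        SELECT o.public_id FROM orders o
        JOIN users u ON u.id = o.user_id
        WHERE u.public_id = ?
    """, (user_uuid,)).fetchone()
    print(row[0])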
By the way, I think the whole "use UUIDs" thing came from NoSQL databases, where you certainly use a UUID but you also don't have to join data. People then transposed a best practice from one scenario to SQL, where it's not really such a best practice...
If a sequential ID is exposed to the client, the client can trivially use it to determine the number of records and the relative age of any records. UUID solves this, and the use of a cryptographically secure number generator isn't really necessary for it to solve this. The author's scheme might be similarly effective, but I trust UUIDs to work well. There are obviously varying ways to hide this information other than UUIDs, but UUIDs are simple and I don't have to think about it, I just get the security benefits. I don't have to worry about not exposing IDs to the clients, I can do it freely.
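For example, a rough back-of-the-envelope sketch (the shop and the numbers are made up) of how much a handful of sequential IDs gives away:

    import random

    # Pretend a shop has 10,000 orders with sequential IDs, and a customer
    # happens to see the IDs of a few of their own orders.
    true_total = 10_000
    seen_ids = random.sample(range(1, true_total + 1), k=8)

    # Classic "German tank" estimator: N ~= m * (1 + 1/k) - 1, where m is the
    # largest serial observed and k is the sample size.
    m, k = max(seen_ids), len(seen_ids)
    estimate = m * (1 + 1 / k) - 1
    print(f"saw {sorted(seen_ids)}, estimate ~{estimate:.0f} orders in total")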
I have never seen anyone post an actual example of the German Tank problem creating an issue for them, only that it’s possible.
> I don’t have to think about it
And here we have the main problem behind most DB issues I deal with on a daily basis: someone didn't want to think about the implications of what they were doing, and then it's suddenly my emergency because they have no idea how to address it.
If you can predict user IDs this is extremely useful when you're trying to come up with an exploit that might create a privileged user, or perhaps you can create some object you have access to that is owned by users that will be created in the near future.
When I say "I don't have to think about it" I mean I don't have to think about the ways an attacker might be able to predict information about my user ids which they could use to gain access to accounts, because I know they cannot predict information about user ids.
You are dismissing the implications of using something that is less secure than UUIDs and you haven't convinced me I'm the one failing to think through the implications. I know there are performance problems, I know they might require some creative solutions. I am not worried about unpredictable performance issues, I am worried about unpredictable security problems.
Perhaps this is my bias coming through. I work with DBs day in and day out, and the main problem I face is performance from poorly-designed schemas and queries; next largest issue is referential integrity violations causing undefined behavior. The security issues I’ve found were all people doing absurdly basic stuff, like exposing an endpoint that dumped passwords.
To me, if you’re relying on having a matching PK as security, something has already gone wrong. There are ways to provide AuthN and AuthZ other than that. And yes, “defense in depth,” but if your base layer is “we have unguessable user ids,” IME people will become complacent, and break it somewhere else in the stack.
> We generate every valid 7-digit North American phone number, then for every area code, send every number in batches of 40000
> Time to go do something else for a while. Just over 27 hours and one ill-fated attempt at early season ski touring later, the script has finished happily, the logfile is full of entries, and no request has failed or taken longer than 3 seconds. So much for rate limiting. We’ve leaked every Freedom Chat user’s phone number
If you put an index on the UUID field (because you have an API where you can retrieve objects by UUID) you have more or less the same problem, at least in Postgres, where a primary key index and a secondary index are more or less the same thing (to the point that it's perfectly valid in pgsql to define a table with no primary key at all, because storage on disk is done through an internal identifier, and the indexes, primary or not, just reference the row's location). Plus the wasted space of having two indexes for the same table.
Of course this isn't always bad: for example, if you have a lot of relations you can have only one table with the UUID field (and thus the expensive index), and the relations can use the more efficient int key (for example, a user entity has both int and UUID keys, and a user attribute references the user by the int key, of course at the expense of a join if you need to retrieve a single user attribute without also retrieving the user).
> Even further, there is technology to encrypt from server to screen. I'm not sure on the rollout on this one. I think we have a long time until this is implemented, and even then, I'm sure we will have the ability to buy screens that fake the encryption, and then let us record the signal. And, for mainstream media, there will be pirated copies until the end of time I think.
In the end, nobody will ever stop people from pointing a camera at a screen. At least until they can implant a decryption device in our brains, whatever comes out of the screen can be recorded. Like in the past, when people used to record movies at the cinema with a camera and upload them to eMule. Sure, the quality was not great, but considering it's free compared to something you pay for, who cares?
To me DRM is just a lost battle: while you can make it inconvenient to copy media, people will always find a way. We used to pirate in the VHS era, and that was not convenient, since you needed two VCRs (quite expensive back then!) and copying took the full length of the movie.
It's a lost battle in the purist sense, but impure things can go far in real life. DRM is like the lock on my door. I'm sure it's a joke for LockPickingLawyer and a good many other people out there, but it has successfully defended my household so far.
DRM just raises the bar a bit for access. In gaming, for example, it gives the publishers a head start over pirates. If the game is unavailable to pirates during the period of biggest hype, a lot more people buy the product than otherwise would.
Also, sometimes DRM wins. For example, right now, Denuvo is undefeated. Some hardware-dongle-authenticated software is also unavailable in pirated form. Of course one could argue that eventually these will be defeated as well, but DRM will still have served its purpose: defending the product from unauthorized copying during the period when it was most desirable.
To me, DRM never made sense when I looked at it from a Free Software standpoint, but it now makes sense from a product management standpoint.
I think it's because it costs money and they get little benefit from doing so.
Major platforms like Netflix don't implement that DRM because they care; it's because the content they distribute requires that they employ those measures, otherwise whoever produces the content won't give it to them. Content on YouTube does not come with this requirement.
Also: implementing strict DRM on all videos would probably be bad for their reputation. It would restrict the devices that can play YouTube, and probably push a lot of content creators to other platforms that don't impose these requirements.
You can't shut down 2G, because there are a lot of devices, mainly embedded systems like alarms, lift emergency call buttons, GPS trackers, etc., that still use 2G. Also, 2G is the only reliable network connection in a lot of areas not otherwise reached by 3G/4G/5G, mainly because a 2G connection is more tolerant of low signal and noise and runs on low frequencies, so 2G is the only option in places like mountaintops. And finally there are still a lot of people, often elderly, who don't have or want a smartphone (mainly because smartphones are more complex to use) and still use an old Nokia on the 2G network (in the end they only need to call or send SMS).
Also: VoLTE has only been a thing for a few years, and there are probably still a ton of smartphones out there that don't support it (and thus fall back to 2G/3G to place voice calls).
You claim that 2G can't be shut down, yet this is already happening [0][1][2]. Some countries, like Switzerland, completed the shutdown years ago [2].
It's the same "we can't introduce chip-and-pin because of all the credit card readers" argument that kept carding an issue a decade longer in the US than in the EU.
Ever since the analog TV shutdown and the refarming of those low frequencies to 4G LTE, 4G actually has better coverage than 2G/3G (both in sparsely populated places like Australia and in dense ones like Japan).
And for 2G specifically, GSM has a physical cell-size limit (about 35 km, imposed by its TDMA timing) that LTE does not have, so in sparse areas the same transmitter location can reach further with LTE.
If 2G still has better coverage in your country, that's not due to technical superiority of the standard but to decisions made by the operator.
I think you are wrong. 2G and 3G are slowly but surely being killed everywhere. Which is a shame, because they're much easier on batteries than 4G, but they want that bandwidth.
For example, lower-frequency bands have longer reach, but lower bandwidth. Because everyone has 2G support, it makes sense to put 2G on the lower frequencies as fallback, with 3G/4G/5G on higher frequencies as optional bandwidth booster. But this also means 5G reliability is being limited by its frequency! You could have better 5G - if it could use the frequency currently occupied by 2G...
It also doesn't help that 2G and 3G aren't forward-compatible. They require their own dedicated frequency, so you need to sacrifice a lot of potential bandwidth for a small number of low-data devices. 4G and beyond can play nice with future tech: a single base station using a single frequency can handle both 4G and 5G connections at once - it'll dynamically allocate the air time as needed.
About the elderly: my 95-year-old grandma uses a tablet for video calls and a big-button 4G-capable feature phone. My 85-year-old other grandma has fully embraced her smartphone. Turns out they really like seeing pictures of their great-grandkids! Give them a reason to switch and they will adopt it - they both ditched their land lines.
Same with elevators and stuff: schedule a kill date 5 years into the future and they'll be replaced by 4G-capable units instead of ancient slightly-cheaper 2G-only ones when their warranty inevitably expires.
It's not that simple. There are a ton of legacy systems where upgrading would cost a lot of money; it's not like replacing a 100-euro smartphone. A lot of these systems have a critical (safety) function, so if they stop working there are consequences (I've mentioned the elevator alarm, but consider alarms in plants in remote areas that use 2G to send alerts, say a sewer pumping station, remote sensors in the mountains, data loggers, electronic bracelets worn by people under restrictive sentences, etc.).
This is the same reason they keep the "old" analog telephone network active and why not everyone has switched to VoIP: there are situations where it's still used by equipment that is critical or too expensive to replace.
> with 3G/4G/5G on higher frequencies as optional bandwidth booster.
There are 5G bands around ~700 MHz (spectrum recovered by switching to more efficient encoding for DTV) that could be used; those are even lower than 2G, which sits around 900 MHz.
They could (and probably will) retire 2G for consumer use, but keep some frequencies for operators that provide M2M SIMs.
> Give them a reason to switch and they will adopt it - they both ditched their land lines.
I've tried multiple times to teach my grandma how to send an SMS and failed. If she uses a mobile phone (a rare situation) she uses it like a landline phone, that is, she types the number she wants to call, without even using the contacts stored in the phone. To be fair, I also had difficulty explaining how to use a cordless landline phone.
Speaking of the elderly, a lot of them have dedicated devices they can use to make emergency calls to registered numbers, which probably use the 2G network (others even use the landline). Since these devices are provided for free by the national healthcare system, I doubt there is much money to spend on upgrading them.
BTW, are we sure that all the smartphones out there support VoLTE? If not, they need to fall back to 3G/2G to make phone calls. It was a common problem not many years ago, with some providers (Iliad) having started supporting VoLTE less than 2 years ago...
Interestingly, my country turned off 3G before 2G.
I think the reasoning was that the heavy data 3G users had already upgraded to 4G and beyond, and low data 3G users could fall back to 2G, so retiring 3G would have negligible impact - while opening up a lot of bandwidth for 4G and 5G.
On the other hand, there were plenty of 2G-only low data users around, so retiring that early would break stuff for a lot of people. Keeping it around longer gave people more time to upgrade.
It's already been completely shut down in large swaths of the world.
> mainly because a 2G connection is more tolerant to low signal and noise
Huh? Everything I've heard about 2G indicates that it is an incredibly noisy protocol with horrible congestion characteristics, and that it craps out even when there's only a few devices. Maybe it's only winning because it has nearly completely disappeared?
They don't do that because they care about your security, but to make it difficult to modify (jailbreak) your own device to run software that is not approved by Apple.
What they do is against your interests; it's about keeping their monopoly on the App Store.
I don't think Google will enforce this verification as something that cannot be disabled. Not because they care about open source, but because there are contexts where Android is used on devices that have no internet connection with which to contact Google services and verify apps installed by whatever deployment method is used. I'm talking about all the industrial contexts where the devices (terminals that operators use) don't connect to the internet but to a local network used only to communicate with the server the application talks to.
By the way, if this is truly implemented and not bypassable via some method such as a developer option, I think I will go back to running a custom ROM (hoping that they don't also start restricting the ability to unlock the bootloader; fortunately that is up to the manufacturer, and you can still find phones with an unlockable bootloader, or just get an older phone).
It probably doesn't require a network connection for basic checking, as the signature can be cryptographically verified even when offline, as long as Google preloads its public keys onto the phones
> It probably doesn't require a network connection for basic checking, as the signature can be cryptographically verified even when offline, as long as Google preloads its public keys onto the phones
It's the standard practice.
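A rough sketch of why no network is needed, using Ed25519 from the Python cryptography package as a stand-in (this is not Google's actual scheme or key handling, just the general mechanism):

    # pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for the key pair a verification authority would hold.
    authority_private = Ed25519PrivateKey.generate()
    authority_public = authority_private.public_key()  # this is what gets preloaded on devices

    apk_contents = b"...bytes of the app package..."
    signature = authority_private.sign(apk_contents)   # done server-side, ahead of time

    # On the phone, with no network at all: check against the preloaded public key.
    try:
        authority_public.verify(signature, apk_contents)
        print("signature valid, install allowed")
    except InvalidSignature:
        print("signature invalid, install blocked")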