> receive side uses a per-packet interrupt to finalize a received packet
This kept even much faster systems from processing packets at line rate. A classic example: standard Gigabit network cards and contemporary CPUs could not process VoIP packets (which are tiny) at line speed, while they could easily download files (which arrive as essentially MTU-sized packets) at line speed.
Fortunately, the receive ISR isn't cracking packets open, just calculating a checksum and passing the packet on to LWIP. I wish there were two DMA sniffers, so that the checksums could be calculated by the DMA engine(s); that's where a lot of processor time is spent (even with a table-driven CRC routine).
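For what it's worth, the chip does have one DMA sniffer, so a single checksum can already be offloaded while the packet is being copied anyway. A rough, untested sketch against the Pico SDK (the seed, bit order and final XOR depend on which checksum variant you actually need):

```c
#include <stddef.h>
#include <stdint.h>
#include "hardware/dma.h"

// Copy a packet and have the DMA sniffer compute a CRC-32 over it in the
// same pass, instead of running a table-driven CRC loop on the CPU.
static uint32_t copy_with_crc32(uint8_t *dst, const uint8_t *src, size_t len) {
    int chan = dma_claim_unused_channel(true);
    dma_channel_config c = dma_channel_get_default_config(chan);
    channel_config_set_transfer_data_size(&c, DMA_SIZE_8);
    channel_config_set_read_increment(&c, true);
    channel_config_set_write_increment(&c, true);
    channel_config_set_sniff_enable(&c, true);

    dma_hw->sniff_data = 0xFFFFFFFFu;      // CRC seed
    dma_sniffer_enable(chan, 0x0, true);   // mode 0x0 = CRC-32; 0xf = plain sum
    dma_channel_configure(chan, &c, dst, src, len, true);
    dma_channel_wait_for_finish_blocking(chan);

    uint32_t crc = dma_hw->sniff_data;
    dma_channel_unclaim(chan);
    return crc ^ 0xFFFFFFFFu;              // final XOR-out
}
```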
You can do it using PIO. I did that when emulating a Memory Stick slave on the RP2040: one PIO SM plus two DMA channels with chained descriptors. XOR is achieved by writing through the atomic alias of any IO register you don't need, at the +0x1000 offset (the manual calls this the XOR alias; +0x2000 and +0x3000 are the set and clear aliases).
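The core of it looks roughly like this (a minimal untested sketch with a single DMA channel; the chained second descriptor and the PIO side are left out, and the watchdog scratch register is just a stand-in for "any IO reg you don't need"):

```c
#include <stddef.h>
#include <stdint.h>
#include "hardware/dma.h"
#include "hardware/structs/watchdog.h"

// Stream words into the XOR alias of an otherwise-unused register, so the
// register ends up holding the running XOR of everything written to it.
static uint32_t dma_xor_words(const uint32_t *src, size_t n_words) {
    int chan = dma_claim_unused_channel(true);
    dma_channel_config c = dma_channel_get_default_config(chan);
    channel_config_set_transfer_data_size(&c, DMA_SIZE_32);
    channel_config_set_read_increment(&c, true);
    channel_config_set_write_increment(&c, false);  // same register every time

    watchdog_hw->scratch[0] = 0;  // clear the accumulator
    dma_channel_configure(chan, &c,
                          hw_xor_alias(&watchdog_hw->scratch[0]),  // +0x1000 alias
                          src, n_words, true);
    dma_channel_wait_for_finish_blocking(chan);

    uint32_t result = watchdog_hw->scratch[0];
    dma_channel_unclaim(chan);
    return result;
}
```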
Luckily the RP2040 has a dual-core CPU, so one core can be dedicated entirely to servicing the interrupts, passing packets to user code on the other core via the inter-core FIFO or whatever else you fancy.
Why would there be context switching? One core exclusively runs user code and polls for new pre-processed packets in some loop; the other exclusively runs the low-level network code and deals with the interrupts.
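Something like this (untested sketch against the Pico SDK; `setup_nic_irq_on_this_core()` and `handle_packet()` are placeholders for whatever driver and user code you have):

```c
#include <stdint.h>
#include "pico/multicore.h"
#include "hardware/sync.h"

extern void setup_nic_irq_on_this_core(void);  // placeholder: route/enable the NIC IRQ
extern void handle_packet(void *pkt);          // placeholder: user-code packet handler

// Core 1 owns the NIC interrupt; its ISR pushes finished packet pointers
// with multicore_fifo_push_blocking((uint32_t) pkt).
static void core1_entry(void) {
    setup_nic_irq_on_this_core();  // IRQs are per-core, so enabling here routes them here
    while (true)
        __wfi();                   // sleep; all the work happens in the ISR
}

int main(void) {
    multicore_launch_core1(core1_entry);
    while (true) {
        // Core 0 never takes a network interrupt; it just blocks on the FIFO.
        void *pkt = (void *)(uintptr_t) multicore_fifo_pop_blocking();
        handle_packet(pkt);
    }
}
```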
It's a Cortex-M33, so there's no meaningful cache to speak of. Access to all memory takes essentially the same amount of time. If you're really worried about access time, you could probably use SRAM banks 8 & 9 (each 4 kB, with their own connection to the AHB crossbar) and flip-flop between the two - but I highly doubt it's going to have a measurable impact.
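If anyone wants to try: the Pico SDK already has section attributes for the two scratch banks. A minimal sketch, assuming the linker script maps `.scratch_x`/`.scratch_y` to those dedicated 4 kB banks (as it does for SRAM4/5 on the RP2040):

```c
#include <stdint.h>
#include "pico/platform.h"

// Two receive buffers, one in each dedicated 4 kB bank, each of which has
// its own port on the bus fabric - so one core can fill buffer A while the
// other core drains buffer B without the two ever contending for a bank.
static uint8_t __scratch_x("rx_ping") rx_buf_a[1536];
static uint8_t __scratch_y("rx_pong") rx_buf_b[1536];
```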
If the interrupt and userspace code run on the same core, there is a chance that the data will still be in a cache line of the processor and it won't have to go through main memory.
This is a stupid clickbait title, and the article isn't even very precise. Yes, that whole fritz.box situation is known and bad. But the problem discussed here doesn't apply to anywhere near every situation. Specifically, the box's built-in resolver (which is still used by default by a lot of things) knows not to forward fritz.box requests to the outside. That is, `dig google.com.fritz.box` and everything else return NXDOMAIN when you're using the built-in DNS.
Ah, that's how it looked to me too, thanks for confirming. Like, if I force dig to use some public DNS, I get the newly registered IPs, but if I use my Fritzbox as the DNS server, it gives me itself / NXDOMAIN, except for hostnames configured on the local Fritzbox.
That, on top of the fact that my Linux boxes won't apply the search domain unless explicitly asked to with a single-label name, makes this a lot less scary.
Yeah, but so many things bypass internal resolvers lately: VPNs, “private relay,” individual apps, DNS over HTTPS. Local control over DNS is steadily being chipped away. The result is that some apps will go out to public or vendor-controlled DNS in ways typical admin tools like ping and dig might not reveal.
And it might be tempting to brush this off as just an anachronism to amuse ourselves with, but IMO this undervalues it quite a bit.
For example, the Austrian teletext still has almost a million daily users (in a country of 9 million) - let that sink in.
And there's a good reason: conceptually, Teletext (at least when it's well maintained) is the antithesis of modern information media. There's neither room nor appetite for clickbait headlines, padded videos, tracking libraries, SEO and so on. You get a curated condensation of current affairs in a tiny package - a few hundred pages, each 40×25 7-bit characters. The SNR is orders of magnitude above anything else out there.
I wouldn't be so quick to crown teletext as the king of succinct media. Just on the first page of the ORF teletext channel you refer to, there are lines flashing between advertisements for online gambling, tattoos and vegan (?) products with which to protect one's bladder and prostate. In order to navigate between news stories you have to memorize series of three-digit numbers or scroll through long indexes. After that, yes, in fairness, you get a nice simple text-only news article. Shame if you actually want the pictures though.
I personally think that the Web is a worthy successor in every respect, mostly because you have so much choice in how the page is displayed. Typefaces, colours, whether or not to display pictures - it's all up to you, the reader.
Mind you, neither the numeric indices for navigation, nor the lack of pictures, is really a stumbling block for the two user-types who most heavily contributed to / constrained the design — that being 1. blind people using screen readers who wanted to access a BBS-like service providing news, weather, etc., that would consider their access needs; and 2. deaf people who were accessing a given company’s teletext system under the expectation of it serving as the visual equivalent of said company’s IVR phone tree.
(In fact, consider how well teletext UX works as an efficient, navigable information-dense directory system for both blind and deaf and motor-impaired users, all inherently such that you just design once for the constraints of the system, and you get “the right thing.” There’s a reason governments latched onto it: it really works for everybody!)
The Web in theory is a successor to teletext in serving these needs… but it was really only the Gopher / HTML 1 Web that was an inherent improvement. As soon as we started nudging content around with semantically meaningless tables and divs to look better, the Web stopped working so well for users with these interaction difficulties.
That's true, but teletext isn't theoretically more constrained than the Web is. There's nothing stopping, for instance, teletext operators from producing pixelated animations or scrolling text effects, just as there's nothing stopping Web developers from adding accessibility-hostile layouts.
In Britain, teletext hasn't been available for over ten years now, but at the same time, a department called the Government Digital Service does an excellent job of making public websites accessible - complete with ARIA labels, semantic elements, all of that sort of thing. I'd readily acknowledge that teletext was ahead of its time, but I don't lament its replacement by the Web.
>I wouldn't be so quick to crown teletext as the king of succinct media.
Show me a current example that comes even remotely close (especially one not skimping on the "curated" aspect).
>[..]mostly because you have so much choice in how the page is displayed. Typefaces, colours, whether or not to display pictures - it's all up to you, the reader.
See, that's the crux here - it's not just up to me. It's up to the media producer what kind of content they offer for me to be able to choose from.
This is a fantastic example of motivated reasoning. This "change" (which apparently isn't even new) can have many different causes, some of which are less harmful and some of which are probably worse (privacy-wise) than the one mentioned here. There is no indication that re/mis-using permissions is specifically what they wanted to do here, and there is no example of them doing it right now. Don't get me wrong: there is also no evidence that this isn't the real reason, or that they wouldn't do it in the future. But the blog post basically lists a single symptom and jumps right to the one conclusion that fits what the author expects.
1. The change does exist (although it apparently has been live for quite some time in some regions at least)
2. The change does have the effect of Google gaining more permissions (and subsequently more data) than previously
3. The author assumes that (2) is the (main) reason why (1) was done in the first place
Regardless of whether (3) is correct or completely wrong - and regardless of whether the author truly believes (3), or only uses it as a rhetorical trick to increase the controversy (and therefore the reach) of their post - both (1) and (2) remain fact.
And (2) is the actual problem here - regardless of whether it was done intentionally by Google or not.
As for (3) - there's no proof either way, as you already said.
But collecting more of the data that their marketing business makes its profits from is likely to have a positive effect on their bottom line.
And since the change has already been live for a while in some regions, it seems likely that Google is well aware of how much impact it has on their revenue.
You decide for yourself if money is or isn't the reason why a big corporation like Google would do something like that.
> The change does have the effect of Google gaining more permissions (and subsequently more data)
There's a huge logic gap here. Obtaining more permissions doesn't at all imply obtaining more data when it's caused by an incidental change. Maybe the permissions aren't being used outside of the Maps context, or maybe it doesn't matter because the data was already known.
It’s true that we can’t really know whether Google is exploiting these expanded permissions to collect more data unless we have some insider information.
However, it’s generally very easy to predict what a company is going to do by observing their business model and incentive structure. In Google’s case, collecting as much data as possible is a major part of their business, so without more information, there’s no good reason to assume they won’t do it.
> It’s true that we can’t really know whether Google is exploiting these expanded permissions to collect more data unless we have some insider information.
You could track usage and see what pages on google.com are accessing these APIs.
I doubt that it's a lot. Google already has fairly good geolocation based on IP; GPS-level accuracy isn't necessary for ads. They could already have connected your data from maps.google.com to www.google.com, because both use consent.google.com and you're getting a unique .google.com cookie.
This is mostly just outrage because people don't understand how things work.
It may not be the only reason, but you’re being too generous if you don’t think this was at least one of the reasons they did it.
Other than some abstract “branding” campaign, I cannot really see many reasons why they would be doing this.
And as someone who worked in adtech in the past, it was very well known that Google used their domain as their tracking cookie domain as it’s nearly impossible for adblockers to just block without crippling other functionality. So they even have a history of using precisely these types of techniques.
> but you’re being too generous if you don’t think this was at least one of the reasons they did it
If you consider it absolutely unthinkable that it was not one of the reasons, it's you who is being too generous. Unconsidered side effects are plentiful and happen all the time.
This is cute, but 100% no. In this case, those involved in the decision were aware of the privacy implications. Whether this was discussed openly, or whether the change was made 'pass-the-buck' style, it doesn't really matter. The association of privacy settings with domains is a well-established basic function in the browser.
> If you consider it absolutely unthinkable that it was not one of the reasons, it's you who is being too generous.
The person you are replying to didn't use the word "unthinkable" or even imply it.
I think you are being either incredibly naive or disingenuous if you believe an adtech giant like google doesn't factor changes to data gathering into every single decision they make.
My default mode is to trust everyone until they break my trust. Now that I am old, I have realized that trusting everyone by default is not a good idea, especially big tech.
In cases like this, I think it is better to assume malice, even if we are proved wrong later. This is not our fault; this is big tech screwing with us repeatedly, for years, with no shame or conscience.
Exactly. If you trust people, you will often be rewarded with friendship and future help. If you trust corporations, they just exploit that trust to maximize shareholder profit, with no value to you.
Perhaps you mean persons deserve the benefit of the doubt? People seem to be the root problem.
I expect there is no difference between an individual and a corporation operated by a sole individual. If one is trustworthy, they will remain equally trustworthy if they happen to have a stock certificate in hand. The corporation isn't able to act autonomously; it acts exactly as the person representing it acts.
Large corporations, involving many people, are where communication breaks down, which leads to unintended consequences that wouldn't necessarily arise if an individual were acting alone. When you have many people, competing interests are bound to emerge in the confusion, and it is not always straightforward which one is best to honour. Even where intentions are pure, humans are bound to make mistakes in their choices.
I think the question is whether an effective feedback loop exists.
If a local dealer does something bad, they quickly get a corresponding response.
A big corp is detached and anonymous. Short of a broad boycott, the response rarely really reaches them.
If a big corp has a sales force, the sales force is responsive to feedback; but the corp is then just as anonymous to them, and whatever they put into the system doesn't reach the right places ...
Even if it's entirely innocuous at present, that's still little better. It would signal that modern-day Google engineers lack the nuanced understanding and user-first deliberation of their predecessors.
Given the breadth of services the company provides, a user ought to be able to restrict the permission to the scope of the maps tool.
Bro, data is money, and those corporations extract as much as they can. Don't try to argue that Google wouldn't be interested in exactly that. One doesn't have to find specific evidence for exactly this scenario, in my opinion; that evidence might well never emerge, while the spying definitely will happen. Otherwise you'd need to come up with an elaborate scenario in which they farm a ton of benefit from this change some other way, because a move like this you don't "just do for a better experience".
> But the blog post basically list a single symptom and jumps right to the one conclusion that fits what the author expects.
That conclusion isn't wrong, though. Your comment basically claims the author is twisting facts, but the conclusion remains: giving google.com/maps permission to geotrack does give google.com permission to geotrack.
"Pinky swear I won't enforce that clause" is not reassurance enough.
The real reason or intention isn't that important, compared to the outcomes of the change. The author correctly evaluated one of those outcomes and the respective implications.
Given Google's track record, I think it is a sensible evaluation of the situation.
When companies like Google are involved, I believe Hanlon's razor works in reverse, i.e. never attribute to stupidity that which is adequately explained by malice.
I will accept motivated reasoning in a friendly setting, but big tech is not my friend. Their one and only purpose is to extract as much value (data or money) from me as possible.
Looking at Heartbleed and other famous security incidents, we should know that minor mistakes "disguised" as "typos" can have devastating effects.
The change may have happened for any of many reasons. Regardless of which one was the motivator, its clear impact is reduced user privacy. When we're talking about a tracking/advertising company, it's kinda natural to assume that this was kept in mind.
Recently I have been trying to recover my Gmail account. Besides sending a verification code to my phone number, it also offered, high on the list, to send a code to the YouTube app. But I have lost access to my Google account, so I cannot open YouTube. Then it sent a verification code to the exact Gmail address I am trying to recover. The whole process is unreal. This YouTube verification thing is definitely new; I don't know the motivation behind it, and it couldn't even detect whether my YouTube app was active or not (or maybe it knew I wasn't using YouTube and is nudging me to log in to or open YouTube). Either way, I am not impressed.
Meta: my answer here is probably also a good example of motivated reasoning because I likely read a bit more into what the author wrote than is factually in the blog post. Oh boy.
I think my critique is somewhat correct in that you seem to suggest that this change was made to allow for expanding the permissions from one product to all products, which I don't think one can derive from the things we know.
I think I was somewhat wrong in that I may have suggested you said this was the only reason (which you didn't, explicitly), and also in that I dismissed the fact that they can now use these permissions from other products; i.e., whether intended or not, the permission set for the other products is broader now.
> [...] though I'm sure they're just beginning to transfer their services to the main google.com domain.
This and the wording across the article imply more than the factual changes. But granted, hooby's comment above is probably more correct than what I wrote.
Are people really surprised that when they hand their location to a domain, any other part of that domain might have access to it? Setting aside the technical specifics of how location access actually works, you've given the data to the _company_. At the very least, they throw it on an internal service and let other parts of the company's infra grab it.
The only conclusion this article reaches is that Google now has the permission to do so, and this is 100% correct - motivated or not. Although, your overly defensive response makes me suspect you have more insight than we do...