I'm a happy user! For sure there are some small basic things I'd love to see (better filtering, user editing, etc.), but overall they get the job done.
Good luck!
Oh nice! Thanks, yep there’s lots to improve. If you haven’t already, drop us a line in Slack or live chat and we’re happy to log the feedback; then we’ll ping you when it makes its way into the app.
Not sure if this is a good or bad signal. The others don't offer free, no-credit-card plans because as soon as you get even the smallest bit well known, you get swamped with spammers signing up.
GDPR has nothing to do with cookies or local storage. They are just mediums that are potentially impacted by GDPR.
GDPR simply makes collecting personal data without consent illegal. This is why a lot of American-centric sites block us from accessing them: they want your data, and they don't want to ask for it.
> This is why a lot of American-centric sites block us from accessing them: they want your data, and they don't want to ask for it.
Also, requiring consent for data that is not technically required for the product is illegal. So even throwing up a splash screen for EU visitors with a single "allow all" button would be illegal.
The GDPR is actually a quite well designed law for what it tries to do; it's just that enforcement lags behind.
You say "or" but then just give examples of what I said.
> law infringement implications
It comes from using people's data in ways that you haven't asked permission for. They don't want to ask for it because it's quite hard to spin "we want to mine your data for Cambridge Analytica-style social manipulation".
Adapting isn't that difficult; the cost comes from people saying no. They don't want to give that option.
Yes, well, I don't assume bad intentions from every single website; for some it just doesn't justify the costs and possible headaches.
> You say "or" but then just give examples of what I said.
> law infringement implications
No, you mean law infringement implications, as in them actually selling your data. I mean some law office chasing non-compliant companies and literally attempting to extort money from them.
I generally found it much less accurate than something like Plausible; Cloudflare's default analytics seem to be more about where requests are coming from.
> Should be GDPR compliant since they don't use cookies or local storage.
That’s not how it works. If there’s personal data being transferred to the US, you are in violation according to the Schrems II ruling. If you only collect non-PII, you should be fine.
Make sure, though, that your definition of PII matches the regulator's definition.
The only personal data that you can get from HTTP requests without doing tracking or fingerprinting is the IP address, which Cloudflare also isn't using.
Cloudflare Web Analytics is extremely simplistic and does not allow for any persistent identification of users or storage of personal information. It uses HTTP Referrers to count visitors and that's it.
One could argue that since it's a US-based company it can't be Schrems II compliant, but you can make that argument about a lot of things.
As a US-based company, they process (even if they don't store) the IP address. As such, the personal data of EU users is transmitted under the control of US surveillance law. Neither SCCs nor commercial contracts can shield this data.
You might have a legitimate interest in processing the IP, but because of the aforementioned issues, you cannot provide sufficient controls or protection of personal data.
As such, using Cloudflare as your Data Processor exposes you, the Data Controller, to DPA scrutiny. As always with the GDPR, DPAs and the EU, whether it is illegal/non-compliant depends on each DPA.
Great for plausible.io of course. But what is the difference for the end user?
- GDPR: hosted in the EU vs the US, so your data travels less far. The things Plausible can do with the data are more or less the same.
- No cookies: I don't see the point of that, tbh; they will probably perform even more invasive tricks like fingerprinting to replace the cookie requirement
Bottom line: the website visitors' data is still logged, stored and tracked - only now with a different actor.
It’s like two dudes developing the solution and, more importantly, charging you for it. If you don’t see the radical difference in incentive structures, then I don’t know what to tell you.
Sorry, had a bit of time left today. It's more like 7 dudes, and their whole proposition is underwhelming, TBH. Mostly gratuitous statements against the ruling order. Half their website is a rant against the 'capitalist' competition. And the whole Christmas tree of doing good is on display. But nothing really sticks:
- Simple and easy: wait until the product matures
- Open source: but no foundation-level governance, like Apache's, for example.
- They promise never to sell to investors, but nothing is in place to actually prevent that from happening. Note that making this binding via a social enterprise is common practice.
- A 45 kg reduction of CO2 per average website compared to Google(!): a clear violation of EU law (2006/114/EC) in my opinion.
- They suggest proxying their service to circumvent users who actively block traffic to Plausible. This is OK, because they are good. [0]
> they will probably perform even more invasive tricks like fingerprinting to replace the cookie requirement
It's clear you didn't even bother to look at Plausible's data policy [1] before assuming what it does and doesn't collect.
The TL;DR: it does not fingerprint, and it does not collect any identifiable information, be it about your device or your person.
> Bottom line: the website visitors' data is still logged, stored and tracked - only now with a different actor.
Only basic device info is logged (not even IP addresses are stored). And it's very easy to self-host, so that different actor may be yourself.
I indeed do not know Plausible or any of their motivations.
Google Analytics also does not provide PII to their end users per se. But I have seen many tools and solutions do just about anything to circumvent that. Merging analytics with transactional data and site logs. Adding company info to visitor data. There is an entire industry there.
So an imaginable use case would be to self-host it and intercept the data to circumvent the limitation.
The reason why I am so cynical is not because of the motivations of Google Analytics or Plausible. It is what motivates the end users, the companies who are using these statistics.
I do know Plausible, and their motivation is to make a sustainable business providing basic web analytics, which is why they charge for their service and Google doesn't. The data they provide to the users of their service is like an order of magnitude less detailed than what Google provides.
I get the cynicism about the industry in general since Google led this merger between web analytics and advertising, but there are plenty of providers in the analytics space that aren't following that path.
But then you still do the same thing, only you host it yourself. Meaning: it is installed and left running for years without updates or monitoring. I'd rather have Google handle things, then.
This can work extremely well for one or two people. It becomes a problem when different people need to agree on what the 10 things are, on categorization, and on maintenance.
And even when defined, at some point some document will be “in the middle”: one coworker will place it in 10, the other in 50. This has happened to me many more times than I can remember.
The impact of culture is another critical data point. For example, in some cultures elders are viewed as kind, wise and important, while in others, grumpy and useless.
I strongly suspect that there is. I've seen this effect in play so many times at smaller timescales, and you'd expect it to be even stronger at longer ones.
In primitive cultures, reaching old age could be seen as an achievement, given the difficulty of surviving without a healthcare system. In most modern societies, however, aging isn't as challenging, and elderly individuals are often "locked away" in retirement homes. In other words, they essentially disappear, living the life of a ghost. Moreover, what kind of wisdom would you expect from someone who spent their years merely waiting for the day of their retirement?
> Moreover, what kind of wisdom would you expect from someone who spent their years merely waiting for the day of their retirement?
The kind of wisdom that they have earned dealing with the daily practicalities of personal life, society, law, bureaucracy and in general the knowledge of navigating the winds of change for many decades. But in order to get the benefit of that wisdom, you have to value them as individuals with valuable experience instead of thinking of them as someone who spent their years merely waiting for the day of their retirement.
Perhaps the wisdom to advise that that is not a good idea. And also that young whippersnappers who think that is what the majority of old people spend their time doing may not be paying much attention to the world.
Reaching old age is still an achievement, especially for young male motorcycle riders, and experience of society and its history always comes for free. The "locking away" stage is something old people generally try to actively avoid, and it mostly happens to those with actual dementia in any case.
I'm always curious as to how global these findings can be; the titles certainly don't narrow it down, making the findings sound universal. Yet could it be that these findings are universal, and culture teaches us to ignore these changes?
A lot of Western culture. The expression "ok boomer" being a prominent example. I'd say any place where "traditional family values" are being eroded, so mainly wealthy urban areas.
This is true, but I would add an asterisk. While not knowing Erlang is not a hard requirement, eventually, you will greatly benefit from being able to read and write some basic Erlang.
That said, learning Elixir first will at least get you in the right mindset, and then Erlang becomes an easy lesson in syntax.
After you work with Phoenix/LiveView, it's so hard to come back to anything else. I love the fact that it's very opinionated, and most times there's a right way to do things.
LiveView is a game changer: the same user experience (even better) with a 10x better developer experience. Not much dual state management, and things just work.
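As an illustration of what "not much dual state management" means: state lives only in the server-side assigns, and the DOM is patched over the socket. A minimal counter sketch (the module name is a placeholder; mount/3, handle_event/3, and the ~H template are the standard LiveView pieces):

    defmodule MyAppWeb.CounterLive do
      use Phoenix.LiveView

      def mount(_params, _session, socket) do
        {:ok, assign(socket, count: 0)}
      end

      # phx-click events land here; updating the assign ships a minimal
      # DOM diff to the browser. There is no client-side store to sync.
      def handle_event("inc", _params, socket) do
        {:noreply, update(socket, :count, &(&1 + 1))}
      end

      def render(assigns) do
        ~H"""
        <button phx-click="inc">Clicked <%= @count %> times</button>
        """
      end
    end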
While I agree wholeheartedly, I do want to point out that while it has opinions in its high quality documentation, it is in no way opinionated in the same way that something like Rails is. You can use Phoenix in the same way you'd use Sinatra or Flask or the like [0]; there are just no generators for it. I just wanted to bring this up in case anyone who is afraid of opinionated frameworks would be put off from trying it. However, if you prefer the opinionated option and want to be given clear guidelines on how to solve many common problems, Phoenix does indeed have you covered :)
I'm always amazed by how malleable the BEAM and the patterns built on it can be. I don't think anyone predicted Erlang to (IMO) reign supreme in frontend development. I say this having written tens of thousands of lines of React and Svelte in production, on top of all the hobby projects over the years.
In hindsight it makes sense. Networked servers need resilience to failure, to keep running, and to make concurrency as painless as possible, especially as CPU core counts increase.
Off the top of my head there aren't many languages that can easily handle a million processes on consumer hardware, while the developer only has to think in single threaded mode because deadlocks and data races are literally impossible.
If you're building a server of any kind, the BEAM is the bee's knees. You can always resort to using a sidecar process or a Rust NIF for the high performance hot path.
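The "million processes" claim is cheap to test for yourself. A rough sketch (module name and message shape are mine; you may need to raise the VM's process limit, e.g. elixir --erl "+P 2000000", before a full million will spawn):

    defmodule SpawnDemo do
      # Spawn n lightweight processes, then round-trip a message through
      # every one of them. On consumer hardware this completes in seconds.
      def run(n \\ 1_000_000) do
        parent = self()

        pids =
          for _ <- 1..n do
            spawn(fn ->
              receive do
                {:ping, from} -> send(from, :pong)
              end
            end)
          end

        Enum.each(pids, fn pid -> send(pid, {:ping, parent}) end)

        for _ <- 1..n do
          receive do
            :pong -> :ok
          end
        end

        :ok
      end
    end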
> million processes ... think in single threaded mode
This is a big component of the secret sauce. Writing top down, happy path code as if you're just exploring an idea and then decide to scale it to distributed nodes without changing the implementation is just absurdly practical and blows every other VM out of the park.
> while the developer only has to think in single threaded mode because deadlocks and data races are literally impossible
Does Elixir somehow automatically solve the case where a row in a database is loaded into memory simultaneously across multiple requests and a value is incremented? (can be easily solved with row locking, but still needs to be done explicitly at the application level)
Usually one would use Redis for this problem, and you have a Redis-like system (but based on pattern matching) native to OTP: ETS.
If you need a global, cluster wide counter, spawn a process with name {:global, :whatever} and let it be the source of truth for this value.
Depending on the problem, there are multiple approaches you can take. And a single Postgres instance is able to deal with massive concurrency; there might not be any need for premature optimization when a SQL transaction would do.
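As a sketch of the globally registered counter mentioned above (module and function names are my own; {:global, name} registration itself is standard OTP): one process owns the value and all increments serialize through its mailbox, so the read-modify-write race never arises.

    defmodule GlobalCounter do
      use GenServer

      # Register under :global so any node in the cluster can reach it.
      def start_link(initial \\ 0) do
        GenServer.start_link(__MODULE__, initial, name: {:global, __MODULE__})
      end

      def increment, do: GenServer.call({:global, __MODULE__}, :increment)

      @impl true
      def init(initial), do: {:ok, initial}

      @impl true
      def handle_call(:increment, _from, count) do
        {:reply, count + 1, count + 1}
      end
    end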
You get concurrent request processing spread across all of your CPU/vCPU cores out of the box. The fallback controller pattern is incredible for boilerplate error handling. Worst case latency is very stable. The query builder/data mapper, Ecto, is IMO far better than ActiveRecord, being more explicit, and it prevents N+1s out of the box. EEx is built on compile-time linked lists rather than run-time string interpolation like the options in Rails and Django.
> You get concurrent request processing spread across all of your CPU/vCPU cores out of the box
That's neat, but this doesn't matter until you reach serious scale, as scaling a rails app horizontally by throwing more server instances at it works for a long time
> The fallback controller pattern is incredible for boilerplate error handling
Sounds just like controller inheritance in rails
> Worst case latency is very stable
So is rails, worst case latency is generally caused by slow SQL requests or having to render complex documents (which can be offloaded to the background easily)
> The query builder/data mapper, Ecto, is IMO far better than ActiveRecord, being more explicit, and it prevents N+1s out of the box
I don't have a problem with ActiveRecord, and while N+1s are easy to create, there are a ton of tools to help prevent these in rails. Can be a hindrance for junior devs or devs without rails experience though
Once things get complex, you're gonna be writing SQL directly anyway
> EEx is built on compile-time linked lists
Cool but sounds irrelevant for 99.9% of cases, string interpolation isn't what causes rails apps to be slow
I'd say those points deserve a deeper look. Take concurrency, for example. It is not only about scaling; it can actually affect every step from development to production:
1. Development is faster if your compiler (or code loader), tasks, and everything else is using all cores by default.
2. You get concurrent testing out-of-the-box that can multiplex on both CPU and IO resources (important given how frequently folks complain about slow suites).
3. The ability to leverage concurrency in production often means less operational complexity. For example, you say you can offload complex document rendering to a background tool. In Elixir this isn't necessarily a concern, because there are no worries about "blocking the main thread". If you compare Phoenix Channels with Action Cable: in Action Cable you must avoid blocking the channel, so incoming messages are pushed to background workers, which then pick them up and broadcast. This adds indirection and operational complexity. In Phoenix you just do the work from the channel (sketched below). Even if you have a small app, it is fewer pieces and less to keep in your head.
At the end of the day, you may still think the bullet points from the previous reply are not sufficient, but I think they are worth digging a bit deeper (although I'm obviously biased). :)
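To make the Channels point concrete, here is the sketch promised in point 3. The module and payload names are hypothetical; join/3, handle_in/3, and push/3 are the standard Phoenix Channel callbacks:

    defmodule MyAppWeb.ReportChannel do
      use Phoenix.Channel

      def join("reports:" <> _id, _params, socket), do: {:ok, socket}

      # Even if rendering takes minutes, it only occupies this channel's own
      # process; the BEAM scheduler keeps every other connection responsive,
      # so no background-worker indirection is needed.
      def handle_in("render", params, socket) do
        report = MyApp.Reports.render(params) # hypothetical slow function
        push(socket, "rendered", %{report: report})
        {:noreply, socket}
      end
    end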
1. Makes sense; our app is limited locally because Docker on Mac is not fast
2. Our test suite is pretty good! It's limited by the longest test runs (basically Selenium tests that are slow because browser interactions are slow); CircleCI allows pretty easy parallel testing
3. I think the same reason we offload long-running rails processes still applies. The only thing a long-running rails request blocks is further requests to that particular thread handling the long request. Usually some other request will end up getting queued to that thread, which is the issue. So it's a load balancing issue, as well as a UX issue (you don't want an HTTP request to take 3 minutes loading a long report). Unless Elixir can indefinitely spin up new threads, this is still a load balancing issue: determining which server incoming requests are routed to
We don't use Action Cable so I can't comment there
2. You should be running multiple Selenium instances (or equivalent) in parallel even on your machine (unless you run out of memory or CPUs).
3. Exactly. This is not a problem in Elixir. If it takes 3 minutes to render a request, all other incoming requests will progress and multiplex accordingly across multiple CPUs and IO threads. This also has a direct impact on the latency point brought up earlier.
Let me quickly address these to the best of my ability, knowing Jose's answers are probably better :)
> That's neat, but this doesn't matter until you reach serious scale, as scaling a rails app horizontally by throwing more server instances at it works for a long time
You can do that, but it's cheaper to get more out of each CPU, and Elixir/BEAM give you that for free with a similarly flexible dynamic language.
> Sounds just like controller inheritance in rails
Not exactly. It works on the basis of pattern matching, and the fallback functions are included in the Plug (think Rack) pipeline. This makes it fast, and you don't have the problems of inherited methods stepping on each other. You also get to match on really specific shapes and cases to handle really granular errors without much effort or cognitive overhead, and you don't need to catch errors the way people often do in Rails controller error handling with rescue_from.
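A hedged sketch of that pattern (module and context names are placeholders; action_fallback and the call/2 callback are the standard Phoenix mechanism). Any action that returns a non-conn value falls through to the fallback, which pattern matches on the error shape:

    defmodule MyAppWeb.UserController do
      use MyAppWeb, :controller

      action_fallback MyAppWeb.FallbackController

      def show(conn, %{"id" => id}) do
        # fetch_user/1 is a hypothetical context function returning
        # {:ok, user} or {:error, :not_found}.
        with {:ok, user} <- MyApp.Accounts.fetch_user(id) do
          render(conn, :show, user: user)
        end
      end
    end

    defmodule MyAppWeb.FallbackController do
      use MyAppWeb, :controller

      # Each error shape gets its own clause; no rescue_from needed.
      def call(conn, {:error, :not_found}) do
        conn |> put_status(:not_found) |> json(%{error: "not found"})
      end

      def call(conn, {:error, :unauthorized}) do
        conn |> put_status(:unauthorized) |> json(%{error: "unauthorized"})
      end
    end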
> So is rails, worst case latency is generally caused by slow SQL requests or having to render complex documents (which can be offloaded to the background easily)
Your Elixir application will often be doing things like background work and managing a key-value store. You can do all of this and saturate the CPU without latency exploding: the scheduler in the BEAM will de-schedule long-running processes and put them at the back of the run queue. Again, you get this for free.
> I don't have a problem with ActiveRecord, and while N+1s are easy to create, there are a ton of tools to help prevent these in rails. Can be a hindrance for junior devs or devs without rails experience though
That's all well and good, but it's a nice feature in Ecto. Ecto also hews closer to SQL, and you can compose reusable pieces of queries in a way that is far more manageable than anything ActiveRecord scopes offer. We (where I work) write anything short of complex CTEs in Ecto's DSL, a lot of stuff I'd never try to do with ActiveRecord. It's just a lot closer to SQL and gets some nice compile-time assurances.
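For illustration, a small sketch of that composability (schema, fields, and module names are hypothetical):

    defmodule MyApp.UserQueries do
      import Ecto.Query

      # Each function takes any queryable and narrows it; the pieces
      # compose freely and still compile to a single SQL statement.
      def active(query), do: where(query, [u], u.active == true)

      def signed_up_since(query, date) do
        where(query, [u], u.inserted_at >= ^date)
      end
    end

    # Usage:
    #   MyApp.User
    #   |> MyApp.UserQueries.active()
    #   |> MyApp.UserQueries.signed_up_since(~D[2021-01-01])
    #   |> MyApp.Repo.all()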
> Cool but sounds irrelevant for 99.9% of cases, string interpolation isn't what causes rails apps to be slow
Rendering collections of nested partials in Rails has always been slow and memory-hungry. That isn't an issue with EEx, and templates also render faster locally.
Whenever I visited /r/all, at least half of the posts were political: either antiwork spinoffs or screenshots from Twitter/Facebook with the thread hating on the person's political views. Hardly marketing-friendly.
At one point, people would find tiny subreddits, post something, share it on private Discord servers, and have people vote it to the front page. Trying to avoid political subreddits as a user was like playing Whac-A-Mole.