Hacker News | karambahh's comments

I'm working in this space; reach out to me and we'll talk. Entering the EU market with this is going to be extremely touchy: the privacy protection agencies are going to mow down your clients with fines if you're not _very_ careful...


I'd be happy to discuss this further and learn from your experience. Please reach out to me at oleh.savchuk@prepin.ai and we can schedule a short call to chat about navigating EU market entry properly. Really appreciate the warning!


If any 'decision' is being made with that screening, you can't run this under the GDPR. You can't have the 'machine' make a life-impacting decision such as employing someone: there is no 'explainability' and no re-do by a human. Someone with time to spare can ramp up your legal costs in court, and they won't have to pay a dime/cent even if they lose, because GDPR (also, the EUDPS's people will have a party at your expense).

Also, considering that your 'AI' collects and processes voice, name, PII, perhaps financial data, etc... good luck with the DPIA!


You raise excellent points about GDPR compliance. Just to clarify - Prepin is a mock interview practice platform, not an actual hiring tool. Users practice with our AI to prepare for real interviews, so no employment decisions are being made. That said, you're absolutely right about data processing concerns - we still handle voice, PII, etc. during practice sessions. We'd definitely benefit from your expertise on proper GDPR compliance for AI-powered interview practice tools.

Would love to discuss the regulatory landscape further - please reach out to oleh.savchuk@prepin.ai when you have a chance.


This tool is made by the French OSS company Linagora, which has been active in that space for more than 20 years.

They most likely have thousands of users running on corporate or state platforms, so it's not a "one-off project" supported by a couple of individuals; it's an actual stack with probably a large dev team.

However, Linagora has been entangled in a legal battle for 10 years with former employees who founded BlueMind[0], a competing offering. The latest episode in the saga appears to be "back to square one", with the case sent back to a fresh start over complex legal issues [1]

[0]https://www.bluemind.net/

[1]https://www.zdnet.fr/blogs/l-esprit-libre/blue-mind-linagora...


It looks like most of their products are just re-badges of existing open source products, including, for some reason, a Java-based email server.


Twake Chat is a Matrix client, seems to be a fork of https://github.com/krille-chan/fluffychat

Twake Mail backend is a fork of https://github.com/apache/james-project but it seems their app can work with any JMAP server.

Twake Drive (web app) and Twake Mail (mobile) seem original.


Yeah, and especially as Vanta is adjacent... I think a rebranding is in order.

Vanta (and the auditors they market) is a nice company I'm a happy user of, but I'm afraid they won't be too pleased with this.

Your project is a pretty nice overview of what network-level monitoring encompasses; I'd say it's more than a tool, it has obvious educational value. It would be sad to see it buried under naming issues.


I'm in the market for a decent laptop. I don't want to sidetrack the thread, but is Arch supported decently on, say, Dell or any "enterprise grade" laptops?


Short answer to a pretty broad question: Yes

More color: I was happy running Arch on a 2012 vintage Dell Latitude (Intel, integrated graphics) for several years. I'm currently quite happy running Arch on a Lenovo Thinkpad T14s (gen2, AMD, integrated graphics).

The Arch wiki does have many pages about Arch on particular models to help once you have a shortlist of models you're interested in, like this one: https://wiki.archlinux.org/title/Lenovo_ThinkPad_T14s_(AMD)_...


I haven’t tried much, but as long as you avoid Nvidia or fancy laptops with weird components, you will be good. My recommendation is to go for a business line, as they have more standardized peripherals. Better still if there’s some Linux support guarantee.


If in doubt, search the Arch forums for posts about the model you're considering. Best case: some threads come up, but all the problems could be solved. Worst case: no threads, or a lot of threads about obscure errors.


I have a Dell Vostro 7620 currently running Arch. Even with the Nvidia graphics card I have run into very few issues (only once did an Nvidia driver update break the system), so I'd say go for it.


As mentioned elsewhere, 30h of being "in the zone" is already above average.

Above that, the teams will be present, or rather their bodies will be, but not productive.

Death march is a real thing and it results, as the name implies, in death.

As a manager, if you choose to run your team in death-march mode, it implies that reaching your goal is more important than the well-being of your team and its ability to function long-term. Short term is... short term.


AWS East going down will (and has) cause(d) disruption in other regions. Last time it happened (maybe like 18 months ago), you ran into billing and quota issues, if my memory serves.

AWS is, like any company, centralized in one way or another.

Want to be sure you won't be impacted by AWS East going down, even if you run in another region? Well, better be prepared to run (or have a DRP) on another cloud provider then...

The cost of running your workload on two different CSPs is quite high, especially if your teams have been convinced to use AWS-specific technologies. You first need to get your software stack provider-agnostic and then manage the two platforms in sync from a technical and contractual perspective, which is not always easy...
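
To make "provider-agnostic" a bit more concrete, here is a rough sketch of what I mean for one service (object storage). It assumes boto3 and google-cloud-storage are available; the class names and factory are mine, purely for illustration:

    # Minimal sketch (my own illustration, not any particular product's wrapper).
    from abc import ABC, abstractmethod

    import boto3
    from google.cloud import storage as gcs


    class ObjectStore(ABC):
        """The only interface application code is allowed to import."""

        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...


    class S3Store(ObjectStore):
        def __init__(self, bucket: str):
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


    class GCSStore(ObjectStore):
        def __init__(self, bucket: str):
            self._bucket = gcs.Client().bucket(bucket)

        def put(self, key: str, data: bytes) -> None:
            self._bucket.blob(key).upload_from_string(data)

        def get(self, key: str) -> bytes:
            return self._bucket.blob(key).download_as_bytes()


    def make_store(provider: str, bucket: str) -> ObjectStore:
        # The DR runbook flips this one setting; application code never changes.
        return {"aws": S3Store, "gcp": GCSStore}[provider](bucket)

The point being that application code only ever sees ObjectStore, so the DRP becomes "flip the provider setting", not "rewrite every call site".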


You just made your software stack's hardware abstraction layer the single point of failure. There's a bug in it, you're down. Everywhere. Not only that, but if there is CS in either your HAL or your application, you're down. So to get the redundancy the original commenter was talking about, you need to develop 2 different HALs with 2 different applications, all using a minimum of 2 different OS and language stacks.

Why multiply your problems? Use your cloud service provider only to access hardware and leave the rest of that alone. That way any cloud provider will do. Any region on any cloud provider will do. You could even just fall back to your own racks if you want. Point is, you only want the hardware.

Now, to get that level of redundancy, you would still have to create 2 different implementations of your application on 2 different software and OS stacks. But the hardware layer is now able to run anywhere. Again, you can even have a self-hosted rack in your dispatch stack.

So hardware redundancy is easy to do at the level the original commenter recommends. Software redundancy is incredibly difficult and expensive to do at the level the original commenter was talking about. Your idea of a hardware/cloud abstraction layer only multiplies the number of software layers you would need multiple implementations of, shadow-run, and maintain to achieve that hypothetical level of redundancy.


That sounds nice in theory, but how do they solve the inventory split?

Inventory levels for SKUs across multiple warehouses are notoriously difficult to balance, even more so for smaller (Shopify mom & pop) retailers.

If I split my inventory across multiple packers, the likelihood of getting "local-ish" delivery gets smaller and smaller.

What I get, however, is a scalable workforce for packaging/shipping. It can result in economies of scale if and only if the whole order can be shipped from a single packer.
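
A quick toy model of that "single packer" condition (my own made-up numbers and assumptions, nothing from their pitch):

    # Toy model: an order of k distinct SKUs, each SKU stocked at s of the n
    # packers, chosen independently at random. Treating packers as independent
    # (an approximation), what is the chance some packer holds the whole order?
    def p_single_packer(n: int, s: int, k: int) -> float:
        p_one = (s / n) ** k            # a given packer stocks all k SKUs
        return 1 - (1 - p_one) ** n     # at least one of the n packers does

    for k in (1, 2, 3, 5):
        print(k, round(p_single_packer(n=20, s=5, k=k), 3))
    # 1 -> ~1.0, 2 -> ~0.73, 3 -> ~0.27, 5 -> ~0.02

So even with each SKU stocked at a quarter of the packers, a five-line order almost never ships from a single one, which is exactly where the economies of scale (and the green story) fall apart.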

By the way, if they use anything other than ultra-local carriers, the package will be sent off anyway to a sorting center dozens or hundreds of kilometers away from the packer, even if the customer is right next door.

It's a good idea if you want to "uberize"/get a scalable packaging/inventory workforce, but it seems to me to be disguised under a green-ish marketing varnish that quickly fades away once you look at it closely.


France Telecom bought Orange (then a UK company) and decided several years later to rebrand the French "Itineris" mobile brand as "Orange", and it progressively did the same across most of its business lines, both domestic and abroad.

Everything from landlines to datacenters to service centers is operated under a unified brand, Orange, originally stemming from this UK mobile operator. (I think there are a few things here and there, such as live TV broadcast, that have not been rebranded, but I'm not entirely sure which and why.)

The "France Telecom" brand has, I think, completely disappeared.

Funnily enough, through twists of corporate history and divestment, there are now completely separate business entities, in sectors entirely different from telecoms, operating under the "Orange" brand, such as Orange Bank in several countries, mostly in sub-Saharan Africa, that do not have anything to do with Orange/France Telecom anymore.


And, like every communications system, it was abused (as mentioned briefly in the WP page) by stock brokers between Paris and Bordeaux via a side-channel attack.

They used it to covertly transmit swings in trading prices between the Paris stock exchange and Bordeaux. If memory serves, they used unused symbols or abused the error correction (I don't remember which), but from a technical standpoint it was pretty advanced and covert.

I seem to remember they only got caught because somebody snitched on them when they suddenly got veeeery good at predicting prices in Bordeaux, some 400 km away from Paris.

It appears the transmission speed for a symbol was on the order of 7 meters per second, and full messages travelled 400 km in 9 minutes instead of 4 days by horse. So speed was a ginormous improvement despite the low bandwidth, very high capex & opex, and limited operating hours.

It operated from 1793 to 1854 in continental France.


It seems to me that the refractive index plays a role: couldn't you increase resolution by replacing the air with another medium in every other cut? Say air, water, air, etc.

My reasoning is that you'd increase the resolution without adding too much technical complexity.

My maths is too rusty to evaluate how it would mess with the Gray code, though.
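
(For reference, the reflected-binary Gray code itself is the easy part; a generic encode/decode sketch, nothing specific to this encoder or to the air/water idea:)

    # Reflected binary (Gray) code: adjacent values differ by exactly one bit,
    # so a slightly misaligned read is only ever off by one position.
    def to_gray(n: int) -> int:
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    assert all(from_gray(to_gray(i)) == i for i in range(256))
    assert all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1 for i in range(255))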

Very nice idea


I didn't include this in my article, but I did some experiments early on (for a different idea) with air bubbles in oil inside a Teflon-coated tube. That presented a lot of challenges (mainly the bubbles breaking up) that made it not ideal for something like this.

This can certainly be miniaturized with the right manufacturing techniques but I left that for the future.

