I really like I2P as a project, and I think it gets a lot of things right. For example having every network participant relay some internal traffic, instead of relying on altruism from relay operators, makes it much harder for a single entity to control enough hops to deanonymize users.
Sadly, outside of torrenting I2P doesn't seem to have much traction, losing out to the better-funded Tor project.
Torrenting is I2P's foot in the door for much wider adoption, which would help drive other types of usage. I really hope it takes off, especially since qBittorrent 4.6 just integrated support; the current stagnation of filesharing, especially w.r.t. needing a VPN for everything, needs some shaking up.
We will still need VPNs to get good connectivity to other residential ISPs until some government entity (or market condition) inspires ISPs to invest in more hardware and peering with each other. CDNs and VPN providers are now sadly a requirement for the internet to work the way we expect it to.
It's already possible (and quite easy) to assign a static IPv6 address for incoming traffic (e.g. a webserver), then configure a private, constantly rotating IPv6 address for all outgoing traffic.
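A toy Python sketch of the rotating-address idea (the function and prefix are illustrative; on Linux the kernel automates this via RFC 4941 privacy extensions, e.g. the `use_tempaddr` sysctl): keep a fixed interface identifier for the stable inbound address, and generate a fresh random 64-bit identifier under the same /64 for outbound use.

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Return a random address under the given /64: the network prefix
    plus a freshly generated 64-bit interface identifier. This is the
    core idea behind RFC 4941 temporary ("privacy") addresses."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "privacy extensions assume a /64 prefix"
    return net[secrets.randbits(64)]  # index within the /64 = interface identifier
```

Each call yields a different address under the same prefix, so outbound connections are not linkable to the stable inbound address by interface identifier alone.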
Not OP, but I'd expect it's so that separate residential locations can communicate directly with one another, bypassing limitations imposed by the ISP, double NAT, dynamic IPs, etc. Depending on the region, ISPs have purposefully blocked residential users from exposing public Internet services.
"...many Tor users cannot be good relays — for example, some Tor clients operate from behind restrictive firewalls, connect via modem, or otherwise aren't in a position where they can relay traffic. Providing service to these clients is a critical part of providing effective anonymity for everyone, since many Tor users are subject to these or similar constraints and including these clients increases the size of the anonymity set..."
"...we need to better understand the risks from letting the attacker send traffic through your relay while you're also initiating your own anonymized traffic. Three different research papers describe ways to identify the relays in a circuit by running traffic through candidate relays and looking for dips in the traffic while the circuit is active. These clogging attacks are not that scary in the Tor context so long as relays are never clients too..."
Doesn’t that make I2P less vulnerable to an issue Tor suffers from, namely hostile-owned relays and exit points? I remember reading something about how a large part of them were controlled by US intel.
That was the parent comment's point. My point is that there are trade-offs. You make the network less vulnerable to that particular attack, and open up other vulnerabilities instead. Anonymous communication service design is full of such trade-offs.
There's a theory that intelligence agencies control many relays. I haven't seen evidence of it. Tor does its best to be secure even when it can't trust the relays.
I think it would be exactly like that, because that's how it already works for relays. The Tor Project operates a measurement system that tests relays in various ways and records their status in the public consensus document. Clients then use weighted random selection (along with some screening criteria) to choose relays in proportion to how much bandwidth they provide, according to the consensus.
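A minimal sketch of that selection step, with made-up relay names and weights (real Tor path selection layers screening on top: flags, guard state, family and subnet exclusions):

```python
import random

def pick_relay(bandwidth, rng=random):
    """Weighted random selection: each relay is chosen with probability
    proportional to its measured bandwidth from the consensus."""
    names = list(bandwidth)
    return rng.choices(names, weights=[bandwidth[n] for n in names], k=1)[0]

consensus = {"relayA": 100.0, "relayB": 900.0}  # hypothetical consensus weights
# relayB should be picked roughly nine times as often as relayA
```

The upshot is that contributing more measured bandwidth earns a relay proportionally more traffic, which is how the consensus-driven scheme described above rewards capacity.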
Having something like that turned on by default but being able to disable it would be a good choice.
Ultimately until someone can satisfy a user’s concern about the privacy and security of what flows through their connection there will be scrutiny on this piece. Being able to interject one’s own proxy or vpn tunnel could be interesting.
Right, I think that it could be good. But it's not obvious that it would be good while it would definitely add complexity, bugs, and attack surface, and discourage at least some people from using Tor.
> Being able to interject one’s own proxy... could be interesting.
I think this is essentially what the Tor Project wants to see instead: if you're in a position to do so, operate your own relay and make it your entry guard. That adds capacity for the network and helps you by mitigating the risk of connecting to a malicious guard.
For a long time Skype's P2P architecture worked great. It was the absolute best A/V calling experience, handily beating every other contemporary mainstream offering from MSN to Y!IM. It had issues such as getting crushed by Patch Tuesday, and I assume it was likely not robust against adversaries, but one thing you can't say about Skype was that its P2P architecture didn't work. It absolutely did.
I think even though there were many things that could've led to the demise of Skype's P2P network, it was pretty undoubtedly the rise of mobile phones. On Android or iOS, Skype was just dreadful; they were clearly duct-taping mobile support on. Sometimes you'd be sending messages and everything would appear to be working, little did you know the other person was responding but you weren't actually seeing anything. Push notifications? Sometimes you got them, but it was a crapshoot as to when. It probably was expensive to try to bridge their P2P network to mobile phones, and it definitely didn't work very well.
I guess anyone can just say these things and it's really difficult to back them up since a large part of it is subjective (and aside from some reverse engineering efforts, I am not really intimately familiar with much of the details behind Skype and its transition off of P2P) but I think there is one thing that most people would definitely not disagree with: Skype was far more relevant and well-regarded when it was peer to peer. That's not to say that the move off had anything to do with its downfall, more just to say that if it was so awful, I think it would've been the other way around.
Skype's quality got much worse after the switch, for everyone I know who used it at the time.
The real reason for switching was the iPhone. First, Apple did not allow long-running apps, in favour of centralized notifications; periodically starting Skype to check for events did not help with receiving calls. Second, users moved to smartphones, which depleted the network of active nodes: short-lived nodes increased without a balanced increase in long-lived ones. So, to keep the P2P network model from failing, Skype moved to a centralized one. Government regulations (to store and decrypt messages) probably played a part too, but that was not very public.
Microsoft buying Skype must have added additional incentives to centralize. Regulatory, organizational control, auditing, etc. And a tinfoil take would include surveilling.
In the early 2000s there was a lot of discussion about a US law that required phone providers to provide for wiretapping, with no mention of the underlying tech. Being P2P, that wouldn't work for Skype, so a journalist asked the Skype CEO whether they would comply with the law and he said "We're not US company. Why would we?"
So I don't think it's all that tinfoily to think that once Skype became a US company, the government might have pressured them into making wiretaps possible, since there was a specific law that arguably required it. Considering all that Snowden revealed later, I'm not sure I'd put any digital surveillance in the tinfoil category.
Tor was invented with the purpose of giving users with poor, restricted connections a way to access forbidden stuff untracked. Think journalists under oppressive regimes in the third world, or, of course, undercover agents.
I2P was invented to give people with relatively good, usually excellent, connectivity a way to access forbidden stuff untracked. Think media "piracy" first and foremost, also dark-grey market stuff, etc. Everything else is better served by either your own VPN, or the public Internet.
One of the reasons people were wary of Freenet, where every user would participate in the hosting, was that if the encryption algorithm was ever broken, people would undoubtedly be revealed to have been passing CSAM along. Unwittingly, but still. Does I2P’s model not spark the same concerns?
Do ISPs worry about relaying TLS traffic because if the encryption is ever broken, it will be revealed that they were relaying CSAM? There isn't any difference between an I2P node relaying encrypted data and an ISP doing it.
ISPs are, especially in the wake of mergers in many countries, large corporations with lawyers. An I2P node might be run by an individual who lacks that security, and this is just something that can be used against that person if he comes to the attention of the authorities for whatever reason. It’s like the risks of individuals’ running Tor exit nodes, which are well known.
It's the same risk as running a Tor exit node: you getting arrested, having all of your electronics confiscated and facing life ruining charges because someone was caught downloading illegal content from IP addresses that point to your servers.
Even if charges get dropped, or you win in court, that's quite a burden.
Does I2P torrenting operate at remotely comparable speeds to without?
I recall hearing that it was vaguely frowned upon with Tor back in the day for saturating the network, and it didn't seem like there was much reason to use it over basically any VPN, especially speed-wise, assuming your motive was to avoid copyright notices.
If you want to avoid copyright notices, then you need to do it for the payload too.
That's because a copyright holder could easily host a copy of some pirated film on your new network, and then just see the destination of the data packets.
Wait, but if the copyright holder is sending the actual payload to you as a client, couldn't you make the argument that they implicitly authorized you to have it?
AFAIK, cops can't deal you actual drugs and then arrest you for it.
It's something Prenda Law got up to and is among the things that did them in.
> An expert witness affidavit stated that IP addresses linked to Prenda's Minnesota and Florida offices and John Steele, had themselves been identified in 2013 as the initial "seeders" (sharers) of some pornographic media, tagged for "fast" sharing on file-sharing networks, which would be followed up by threat of legal action
"Entrapment" is actually quite narrow. IANAL, but my understanding is it requires showing that you wouldn't have done the illegal activity without the involvement of the police.
Harassing a person for months to buy weed qualifies; posing as a drug dealer and offering drugs to the people who pass by you, does not.
I2P has a community of people that actually use the darktubes, as opposed to Tor: 99% of Tor users just use Tor to browse the vanilla internet. There is no real 'Tor community'.
I remember I2P torrents being horrendously slow, like 20 kbps at max (do you even remember the last time someone used kbps as a unit?) for popular stuff. Has that changed in the last few years?
I regularly get up to 200-500 kbps depending on the number of peers and their router configuration (I have my bandwidth configured above the defaults, which are very low).
I understand that is unheard of for someone living in the "instant access era", but their protocol has a cost.
This appears to link to the C++ version of I2P, not the original "official" Java version, which is more complex and has many more built-in features: https://geti2p.net
Not really. Java was built from the start to run on microcontrollers and is still running fine on billions of small phones; it now powers the apps inside a billion Android devices.
You are likely referring to awful enterprise frameworks like Spring that make a lot of noise. Some ten years ago it was JBoss giving Java a bad name. You won't find those enterprise frameworks used by most open source projects.
For those cases Java is kept clean and fast, as it should be.
Exactly. Java itself is among the fastest, and can be quite lightweight. It's primarily the heavy and slow frameworks (especially "enterprise" frameworks) that have absolutely abysmal performance.
OK, can you then provide me some info about that Java compiler/JVM stuff?
The last time I used Java was back when it still had the Sun Microsystems logo :) It was something like Java 1.5.x. The JRE installer was 16 MB, so by my standards it's already quite heavy.
My old static Ruby build that I ship with scripts is under 1 MB. I wonder if I am missing something...
Depends on app complexity and libraries to be included. The most complex/heavy app that I've compiled from Java to native code was around 30 MB.
Compiled as JVM bytecode, the same app is around 16 MB. By my standards this is quite OK when considering maintainability across the next centuries.
I was a Java true believer in the late 90s/early 2000s, working on LimeWire, the most popular Java desktop app at the time (maybe of all time?). I thought we were close to a time when garbage collection was just as fast as manual memory management, and runtime re-optimization could make up for the extra time an ahead-of-time compiler could spend on optimizations. (Yes, I've read all of the Garbage Collection Handbook, and am aware of the speed advantages of bump allocation and the advantages of memory compaction. However, when it really matters, people still sidestep Java's GC using object pools and/or size their heaps such that they never collect while the stock market is open.)
Then, I started slinging C++ for the Google indexing system.
I still hope we get to a point where compiling to native code is as rare as hand-writing assembler is today. I hope we distribute code primarily in a format optimized for native translation, SafeTSA or similar. (Though, I'd hope we get install-time caching of native code generation, similar to AS/400 TIMI / current Android Runtime.)
However, until compilers get very good at statically inferring lifetimes and statically scheduling object collection, I hope garbage collection is optional and freely mixable with manually managed objects. (Yes, statically determining minimal lifetimes in the general case is equivalent to solving the halting problem, but we can be conservative and fall back on GC in the statically-unsolvable cases.)
The main drawback of Java for most applications is that with garbage collection there's a time-space tradeoff. As a rule of thumb in order to avoid frequent major collections, a Java program is going to use about twice as much memory as an equivalent C/C++ program.
When it really counts, we're still not at the point where Java is faster (even after warm-up) than expert-written hand-optimized C/C++ with profile-guided optimization.
Don't get me wrong. I understand the development velocity advantages of Java over C/C++ can often more than make up for performance differences, and good C/C++ developers (especially with domain-specific skills) are rather expensive to employ. I worked on equity trading systems in an interpreted language, where we replaced a lower latency system written in Java. The high-level interpreted language enabled a very rapid turnaround time and was easier for Statistics/Physics PhDs to implement models. The better models resulted in better average prices despite the system reacting more slowly to incoming data.
Ideally, I'd like to see something Elixir-like with good interoperability with something Rust-like for the parts that use a lot of CPU time and/or a lot of memory, all compiling down to a SafeTSA-like compressed control flow graph representation designed for fast native code generation.
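The object-pool trick mentioned above (sidestepping the collector by never allocating in the hot path) is roughly this pattern; a hypothetical Python sketch of the shape, not the production Java form:

```python
class ObjectPool:
    """Bare-bones object pool: allocate everything up front, then reuse.
    In steady state nothing is allocated, so (in a GC'd language) the
    collector has nothing to do during the hot path. Toy code only."""
    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]  # pre-allocate once

    def acquire(self):
        return self._free.pop()  # raises IndexError if the pool is exhausted

    def release(self, obj):
        self._free.append(obj)  # return the object for reuse
```

The same idea underlies the "size the heap so it never collects while the market is open" approach: both trade memory held up front for the absence of collection pauses at the worst possible time.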
For those wondering: the best practical use of I2P is tunneling SSH access to obscure devices behind NAT where you can't or don't want to use something like Tailscale. Or imagine you have a torrent box you're using for seeding an obscure book or music collection. You can pay for the server with crypto, and I2P is good for making sure you can access and configure it privately.
Have you heard of yggdrasil then? It sounds like it would be a better match for your use case.
yggdrasil is a "greynet". End-to-end encrypted, self-organizing via DHT, but no onion/garlic routing. Has interop capabilities with both Tor and I2P though, and some yggdrasil nodes are I2P- or Tor-only. A world-tree with roots (tunnels) going in all the spheres of existence (nets)
It hands out IPv6 addresses to its users. These addresses are generated automatically from your public key, so they're essentially impossible to spoof, and you get automatic authentication plus end-to-end encryption. As if IPsec were pervasive and completely transparent.
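A deliberately simplified Python illustration of key-derived addressing (not yggdrasil's actual algorithm, which also encodes the number of leading one-bits of the hash to make "strong" addresses costly to brute-force): hash the public key and use the digest as the address body.

```python
import hashlib
import ipaddress

def key_to_address(pubkey: bytes) -> ipaddress.IPv6Address:
    """Derive a stable IPv6 address from a public key: a fixed prefix
    byte, then 15 bytes of SHA-512 of the key. Spoofing the address
    would require finding a key whose hash matches those bytes."""
    digest = hashlib.sha512(pubkey).digest()
    return ipaddress.IPv6Address(bytes([0x02]) + digest[:15])
```

Because the address is a pure function of the key, a peer who proves possession of the key has also proven ownership of the address, with no separate registration step.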
I'd say "best" practical use is those of people under threat of institutions and nations and it works well for simple access. It even has a stealth mode for censorship regimes in which your router doesn't advertise itself and lays down.
Years ago I tried I2P to test the limits of the anonymity it can provide. It’s sad that it doesn’t seem to have much funding, because it’s far superior to Tor in every respect. The guys worked really hard on the theory before implementing it. Still, the UX of the router was really bad. It really needs a standalone binary to work flawlessly and performantly across all platforms, not to mention a GUI that doesn’t require you to know many technical concepts beforehand. The current router is written in Java, and I hoped i2p-rust would catch up, but it seemed a half-dead project.
Well, there are no problems with Java except, as you said, the willingness of volunteers to support it. It's much easier to inspire people to try a shiny new language.
However, I would say that Rust/Go are already moving out of the spotlight for that purpose. For the hype we'd look towards Zig or Nim or something I've yet to hear of.
How straightforward is it to create new circuits using I2P? I'm curious whether this is supported by the API and how long it takes.
For context, I'm developing a voting system [1] where votes are signed pseudonymously and must be transmitted over an anonymous channel. Additionally, it's vital that no two pseudonyms use the same anonymous channel, as this would weaken the anonymity.
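That channel-exclusivity constraint can be enforced with very little bookkeeping. A hypothetical sketch (`ChannelRegistry` and its method names are made up, not part of any I2P API):

```python
class ChannelRegistry:
    """One pseudonym per anonymous channel: the first pseudonym seen on
    a channel owns it, and votes arriving on that channel under any
    other pseudonym are rejected. Toy bookkeeping only."""
    def __init__(self):
        self._owner = {}  # channel_id -> owning pseudonym

    def accept(self, channel_id, pseudonym):
        # setdefault records the first claimant and leaves later ones alone
        owner = self._owner.setdefault(channel_id, pseudonym)
        return owner == pseudonym
```

So `accept("tunnel-1", "alice")` succeeds, and a second pseudonym showing up on the same tunnel is refused, which is the property that stops two pseudonyms from becoming linkable through a shared channel.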
I was sad to see i2p's maintainer zzz quit the project after he got some pushback about politics I think. Reminded me to be thankful for all the unpaid hard work open source maintainers put in.
One of the maintainers added an inclusion statement onto the footer of the website and a few developers thought it was pushing an LGBT agenda so they quit. This was from a quick search, I’m sure there was more to it than that - one part was that some were mad they weren’t even consulted before it was added and misrepresented their views.
That inclusion statement seems very badly thought out and quite Western-centric. For many people in oppressive states who rely on censorship-resistant and privacy-guaranteeing software, being a dissident isn’t necessarily about defending minorities or trying to make the world a better place. It’s often about retreating into a private world with your fellows and trying to live your life as best you can under the circumstances. So, IMO it’s better not to specify any specific social-justice goals for projects like these.
I disagree, because I think in the definition of the word dissident it's not enough to oppose a system mentally. It needs to be challenged actively.
What you describe sounds more like what I would describe as society dropouts
In English, "dissident" has been commonly used for decades to describe opponents of a regime (usually the USSR or other Eastern Bloc regimes) whose activity has been limited to distributing underground literature, or organizing cultural events in the private sphere instead of in the state-controlled venues. Only a small minority of dissidents publicly challenged the regime.
Or not even agreeing. It has been fairly common for dissidents in the USSR and Putin’s Russia to express outright scorn for people who advocate for social justice, especially LGBT whom even dissidents might dislike. Social-justice advocates are seen as naive dreamers. Also, while people who support the regime are odious, those who actively work against it might be accused of allying with the country’s enemies.
As I said, the favoured course of action for some dissident communities is instead retreating into the private sphere and trying to live one’s best life there. I have heard that this is a common attitude among dissidents in China, too.
> the favoured course of action for some dissident communities is instead retreating
I'm not sure that this kind of passivity can properly be described as "dissidence". Surely dissidents are people who speak up, taking a risk with their own security?
At any rate, I don't want to quibble about semantics. If you disagree with your government, but aren't prepared to speak up, then you're at best getting in the way. Passivity is what authoritarian governments depend on, so passive "dissidents" are like collaborators.
That kind of passivity can most definitely be described as dissidence. Those Soviets who circulated literature through samizdat, who put on performances of disapproved modernist music or poetry in their own flats to a small circle of peers, etc. are commonly described as dissidents even when they never publicly challenged the authorities.
The claim that such dissidents are collaborators is, again, Western-centric. Dissidents can and have argued that the regime's internal contradictions will eventually undermine it, without them having to take actions that put themselves at risk or leave them open to accusations of aiding the enemy.
> GP spoke of people who retreat into what seems to be passive silence.
I said people who retreat into private worlds. Samizdat was a private world. Events held in people’s homes were private worlds. Writing non-conforming literature or music “for one’s desk drawer” was a private world. Modern dissidents using censorship-evading, privacy-guaranteeing software to enjoy community are in private worlds.
Calling such dissidents “part of the problem” is not helpful. There have been famous cases where Westerners’ demands for how dissidents should behave, actually pushed dissidents closer to the regime.
Tor Browser is Tor's killer app. I2P needs a secure simplified fingerprint-free browser that only does basic HTML, otherwise you're just asking for trouble.
Is there anything Tor related built into that browser?
Couldn't I2P users also just use the Tor Browser and get the same benefits of less fingerprintability?
This has been my experience as well, which is a bummer because I think hidden services are the best part of Tor, and my understanding is that I2P is basically designed with hidden service like features in mind from the ground up.
It seems like something like this would be great for people living under authoritarian regimes. Making I2P dead-simple feels like it would benefit a lot of people and help to make censorship more difficult.
I am a long-time happy user of authenticated Tor hidden services for secure admin access to SSH servers and some self-hosted services, and they are my last hope when even tailnets and meshnets fail to reach. i2pd C++ I2P nodes are very helpful as backup authenticated hidden services for some of my servers; in general Tor is more stable, but sometimes I2P can work around things when Tor fails.
Tor is simpler and better audited, and I don't mind too much the slight centralization of the Tor directory authority nodes, plus there are the pluggable transports.
I2P is more complete (UDP, protocol libraries, nice hidden-service client and server port handling, etc.); its somewhat chaotic decentralization is a mixed blessing, but that's the point. I like the tradeoff of mixing my bandwidth with others' bandwidth (pay with some bandwidth now to save my rear end later when needed).
The i2pd C++ node is pleasant for me because it is compact and clean for my needs (authenticated hidden-service SSH access and self-hosted web services) and I can manage it almost like a Tor node. The original I2P Java node is good for end users, handy with its integrated IRC, email, and file-sharing services.
I2P actually has a functional network, Veilid has just launched, and isn't really available to the public in a meaningful way.
Outside of the practical: I2P is built entirely in Java while Veilid is built in Rust, so Veilid is potentially more performant; Veilid uses modern ciphers, so it is potentially more secure; Veilid is potentially easier to modify and integrate into apps; and Veilid locally encrypts its storage, which I2P does not.
So, realistically, it's a more modern take on I2P, designed to work on mobile, improvements are subtle, but might help create additional adoption if they can get it into people's hands.
It does. The VeilidChat app is built on top of the general-purpose Veilid application framework. Chat apps were the first, easiest things to make on that framework, but there's nothing inherently social media (or even messaging) oriented about it.