I wrote this to publish Org docs to S3 - https://github.com/EnigmaCurry/s3-publish.el - I wanted something extremely lightweight: no committing things to git like I normally would, and no waiting for CI to build anything. Uploading the HTML directly to S3 means it gets published in <1s from push.
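For anyone curious, the underlying operation is roughly the following (an illustrative CLI equivalent, not the package's actual elisp; the bucket name is a placeholder):

    # Export the org file to HTML in Emacs, then push the result straight to S3:
    aws s3 cp post.html s3://my-blog-bucket/post.html \
      --content-type text/html

No git, no CI pipeline; the object is live as soon as the upload completes.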
That's neat! For Org, if it had an option to generate the HTML file name by slugifying the org file name instead of using the salted hash, it could be fantastic for rapid lightweight blogging.
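Something like this is the mapping I mean (just a sketch of the idea, not anything s3-publish.el does today):

    # "My Great Post.org" -> "my-great-post.html"
    slug=$(basename "My Great Post.org" .org \
      | tr '[:upper:]' '[:lower:]' \
      | tr -cs 'a-z0-9' '-')
    echo "${slug%-}.html"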
Fair enough. I haven't used an Android device since 2017... Do people have these issues on iOS too?
On Linux, I have no problem running either bare WireGuard or Tailscale alongside FortiClient. On Windows and macOS it's a bit more janky, specifically around DNS resolution, but I don't daily drive those platforms, so I may be missing some knowledge to fix this.
On a Linux box, is it possible to run Tailscale/WireGuard as an exit node along with the Forti VPN?
In other words, what I want to achieve is: (my machine + Tailscale/WireGuard) --> (server with Tailscale/WireGuard + Forti VPN) --> corporate network. So WireGuard or Tailscale receives the traffic and forwards it through Forti.
Or, another option: (my machine running FortiVPN over Tailscale/WireGuard) --> (server as exit node) --> corporate network.
Rather than using the official FortiClient, I am using https://github.com/adrienverge/openfortivpn. It has some options to configure custom pppd/routes/DNS etc. if necessary, which I have not touched as I don't know enough :P
DNS resolution is not important for my use case, only routing the traffic.
I have heard not-so-great things about Forti VPNs; sorry to hear you have to work with those.
In theory, as long as the Forti VPN does not overlap with the Tailscale IP address range, the simplest solution is to just run Tailscale and openfortivpn on a single node. You can then advertise the Forti VPN subnets within Tailscale; that's effectively what my image does in a nutshell as well, except that it parses the WireGuard config and sets up firewall rules for convenience.
Tailscale does NAT automatically by default, so it will look like all traffic is coming from the openfortivpn client itself.
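A minimal sketch of that single-node setup (the subnet and hostname are placeholders; use whatever routes your Forti gateway actually hands out):

    # On the node running both clients:
    sudo sysctl -w net.ipv4.ip_forward=1                # allow forwarding between the tunnels
    sudo openfortivpn vpn.example.com:443 -u myuser &   # Forti tunnel comes up (typically as ppp0)
    sudo tailscale up --advertise-routes=10.20.0.0/16   # announce the Forti subnet to your tailnet

You still have to approve the advertised route in the Tailscale admin console before other nodes will use it.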
When I just try to run Tailscale and FortiClient together naively, Tailscale does not like it very much, heh. Looks like I'll need to study what your image is doing in depth.
I don't know about FortiClient specifically; it's a sorry piece of crap that's broken more often than not.
With openfortivpn, you can usually ignore whatever routes you receive and set up your own. I haven't tried the specific setup you talk about, but I don't see why it wouldn't work. However, you would most likely need to NAT on the machine running the Fortinet client.
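The NAT part would look something like this (assuming the WireGuard interface is wg0 and openfortivpn's ppp interface is ppp0; adjust the names to your setup):

    sudo sysctl -w net.ipv4.ip_forward=1
    # Rewrite the source address of packets leaving via the Forti tunnel,
    # so replies route back to this box instead of to unknown WireGuard IPs:
    sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
    sudo iptables -A FORWARD -i wg0 -o ppp0 -j ACCEPT
    sudo iptables -A FORWARD -i ppp0 -o wg0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT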
Sounds like I'll need to learn how to set up custom routes and their syntax. I have tried to run away from that all my professional life, but maybe now I need to.
> However, you would most likely need to NAT on the machine running the Fortinet client.
Could you please elaborate a little more here? NAT from where to where?
Yea, on Linux I can run 10 different VPNs (or 10 WireGuard peers) no problem; this limitation of Android is super annoying to me. I think OP's solution is quite a good one for Android users.
Yeah, you're exactly on point here, and this limitation exists on iOS and Android alike. I got very frustrated with switching between VPNs and connections breaking every time I did that.
I feel like my Rust code takes 3x as long to write as my Python code, but the GPT results for Rust are about 10x better, because the tooling is a backstop against hallucinations.
I really like the Rust tooling, and I like exhaustive pattern matching. Python post-dev debugging time is probably 10x that of Rust. That's why I choose Rust.
There's no way to combine the NVMe drives into a larger unit for redundancy/failover though, so I'm not sure what kind of future uptake this could have.
Everyone who uses NVMe-over-network-transport simply does redundancy at the client layer. The networking gear is very robust, and it is easier to optimize the "data plane" path this way (mapping storage queues <-> network queues), so the actual storage system does less work, which improves cost and density. That also means clients can have their own redundancy solutions that more closely match their requirements: filesystems can consume the block devices and implement RAID10 for, say, virtual machine storage, while userspace applications may use them directly with Reed-Solomon(14,10) and manage the underlying block devices themselves. This all effectively improves density and storage utilization even further.
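As a concrete sketch, with NVMe/TCP the client just sees ordinary block devices and can layer its own redundancy on top (the addresses and NQNs below are placeholders):

    modprobe nvme-tcp
    # Attach two remote namespaces; each appears locally as /dev/nvmeXn1:
    nvme connect -t tcp -a 10.0.0.11 -s 4420 -n nqn.2024-01.example:ns1
    nvme connect -t tcp -a 10.0.0.12 -s 4420 -n nqn.2024-01.example:ns2
    # Redundancy is the client's choice, e.g. a simple mirror
    # (RAID10 across more targets works the same way):
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1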
NVMe-over-network (Fabrics w/ RDMA, TCP, RoCEv2) is very popular for disaggregated storage/compute, and things like Nvidia BlueField push the whole thing down into the networking cards on the host, so you don't even see the "over network" part. You have a diskless server, plug in some BlueField cards, and they expose a bunch of NVMe drives to the host as if they were plugged in physically. That makes it much easier to scale compute and storage separately (and it also effectively increases the capacity of the host machine, since it is no longer using up bandwidth and CPU on those tasks).
Yeah. It seems like directly presenting raw disks to the network means any kind of redundancy would need to be done by whatever device/host/thing is mounting the storage.
And doing that over the network (instead of over a local PCIe bus) seems like it'll have some trade-offs. :/
EndeavourOS Sway edition (community) [1] has been a pretty great start for a preconfigured Wayland+sway+waybar setup and various integrations. I added some more stuff to my ~/.config in my own repo [2], but it was a good place to start, and EndeavourOS is basically just rolling Arch Linux with some extra niceties. The included EnvyControl [3] switcher between nvidia and integrated graphics is nice (yea, you do have to reboot to switch), so I have used the regular i3 config with Xorg for playing some games (nvidia hardware graphics), but use sway for my day-to-day use (integrated graphics on Wayland).
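If I remember the CLI right, the switch itself is a one-liner (mode names per EnvyControl's docs):

    sudo envycontrol -s integrated   # or: nvidia, hybrid; takes effect after a reboot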
They're not much different; they just run on a different time scale.
Even if your application itself continues to work forever, one of the following will happen:
- Complementary software evolves. E.g.: my old photo editor doesn't support new image compression formats.
- Alternatives become more attractive. E.g.: I paid for a copy of Sublime Text, but now I prefer VSCode because of its additional functionality; my old copy of Photoshop CS2 works fine, but the new one will save me time during XYZ workflow.
- The utility of the application is exhausted. E.g.: I already played this single-player game 10 times and it's not fun anymore; my copy of Final Cut Pro 6 can't produce the 4K HDR movies my customers demand.
I have versions of Paint Shop Pro from the 90s that can open JPEG, GIF, PNG, TIFF, etc. files.
I have used versions of After Effects of similar vintage. Premiere and AE were doing 4K back then because that’s what Hollywood needed for their productions. Illustrator and Photoshop are mostly functional.
Likewise, my Nikon camera from 2011 doesn’t stop working just because it’s old. The tools in my garage are no less effective because home additive manufacturing exists.
If you are stuck on WordPress, consider using a static export plugin like wp2static [1]. I have been playing around [2] with a setup where the normal WP UI is protected behind HTTP Basic Authentication, and wp2static exports the public snapshot.
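The Basic Auth part is plain web-server config; an Apache-flavored sketch (paths and the username are placeholders, and nginx works just as well):

    # Create the credentials file, then gate the WP login with it:
    sudo htpasswd -c /etc/apache2/.wp-htpasswd editor
    cat >> /var/www/wp/.htaccess <<'EOF'
    <FilesMatch "wp-login\.php">
      AuthType Basic
      AuthName "Private"
      AuthUserFile /etc/apache2/.wp-htpasswd
      Require valid-user
    </FilesMatch>
    EOF

The exported static snapshot stays public; only the editing UI sits behind the prompt.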
Interesting! Can you clarify what configuration in the manifest this would be? Or is it not a standardized pattern and more akin to reverse engineering on a per-app basis?