> To our great surprise, we've found that our codebases have a lot more declarations than assignments, so it makes sense to require the extra keyword on assignments because they're rarer.
I actually love this statistical approach to language design. It draws parallels to something I read about CPU designers optimising the inner workings of their instruction sets based on heuristics such as the average number of function arguments, etc.
The argument isn’t that they don’t exist or are not published, it’s that they are not published widely.
Given that the circulation of Linux Format was just 19,000 in 2014, that number seems to back up the position that you wouldn't find this at a street-level vendor.
And what do you suppose the ratio between train stations + shopping malls vs actual street-level tobacco and magazine stands/shops is in most European countries? 1 to 700? 1 to 1000?
Anyway, the whole argument is redundant. Go ahead to your nearest newsagent and see if they stock it. If you live on top of a train station or a supermarket, then you're due both congratulations and commiserations.
For everyone else… you can’t buy them at your nearest kiosk.
Street-level magazine kiosks sell magazines with up-to-date Linux kernel CD-ROMs? That's a lie. I've literally never seen that anywhere in Europe in the last decade. You might be able to find them in specialist shops or larger supermarkets, as you can in the UK, but street-level kiosks? Get real.
I'm in France, and I regularly buy one of those when I take the train lol.
We have quite a bit of choice and they all have recent-ish issues: https://www.journaux.fr/linux_informatique_1_0_130.html. You can find at least a couple of them in most kiosks; Linux Identity, at least, always comes with a physical disk.
And you don't have any friends & family who regularly come to you for help?
When I built my current PC, I didn't bother with a 3.5" floppy drive, but I still had to get a USB one a couple of years ago when an acquaintance showed up needing to read some files off floppies...
In certain environments I have extensively worked in, the machines on which one builds are only allowed internet access on an IP:port basis, after a months-long process involving dozens of people across multiple teams.
Many people download once, use constantly and on many machines.
My dev PC has never been online since it was put together; all patching and updating was performed offline. All builds were bit-for-bit reproducible.
This is it right here. A similar analogy this brings up is car manufacturers of old excusing fatalities as the idiot driver's fault rather than the result of the vehicle's inherent lack of safety features. It's too damn easy to make the wrong assumption in C & C++ and spend a painful amount of time debugging.
In my experience the added safety guarantees have made me more comfortable "going faster", and, back in vehicular terms, the choice is still there to not fasten the seatbelt and go `unsafe`.
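To make the seatbelt analogy concrete, here's a minimal sketch, assuming Rust (since `unsafe` is its keyword): the safe API is bounds-checked, and `unsafe` is the explicit opt-out you have to reach for on purpose.

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Safe access: bounds-checked, an out-of-range index yields None instead of
    // undefined behaviour.
    assert_eq!(v.get(2), Some(&30));
    assert_eq!(v.get(99), None);

    // The seatbelt is still optional: `unsafe` skips the check entirely.
    // `get_unchecked` with a bad index would be undefined behaviour, exactly
    // the class of bug the safe API rules out.
    let third = unsafe { *v.get_unchecked(2) };
    assert_eq!(third, 30);
}
```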
A lot. The iPad is not a computer, period. It's great hardware (it has the same CPU as the MBP) with extremely limiting software; low-hanging examples of the limitations would be how iPads fail to utilise any monitor properly, the inability to natively open a terminal instance, and the weak filesystem. You would think that, given its power, it would be a perfect platform for developing native applications, but no: in order to get any work done you need to remote into an actual computer, which defeats the point.
The experience is quite nice though. A lot of people already develop on remote machines, and switching to a “dumb client” with an amazing screen, huge processing power, touch and writing abilities, plus crazy battery life is not a bad deal.
There's a weird and persistent-over-years blindness by a really high percentage of HN posters to how amazing iPads and even iPhones are as tools for creative work of practically every kind that isn't programming or extremely mouse-centric. Meanwhile they're way more useful than a laptop would be for damn near every creative task I partake in that's not programming. The combination of form factor and sensor suite is pretty amazing, and very useful. If I were only allowed one computing device in my house, no question I'd choose an iPad of some sort. Probably the big Pro. It'd hurt more not to also have a phone (for the tiny size & portability) than it would to not also have a laptop. I can always SSH somewhere else to run my software. I can't access the extremely useful capabilities of an i-device without having it present with me.
I own an iPad; I'm not unfamiliar with its capabilities. It's just not that compelling a package, hardware- and software-wise. If I'm going to write code, I could either punish myself and reach for the iPad, or I could go grab a laptop and get work done. I'm not denying the fact that you can SSH into a server with it and use special bindings for your Magic Keyboard, but... why would you? Professionals want robust tools, plain and simple. The iPad is a wading pool for a lot of activities; you can get a little photo editing done, you can do some video editing in a pinch, you can futz around in GarageBand with pre-sampled instruments and a handful of sampled synthesizers, but once again: why? They're neat timewasters, party tricks in an 11" form factor. It's not held back by a lack of power, it's held back by Apple's refusal to give it proper tooling. If there were a proper DAW on the iPad, I might be agreeing with you. If it didn't tone down the file management to somehow be worse than a Chromebook's, there might be some kind of professional utility there. But they don't, and that's why HN posters simply don't care about your iPad. It's a Disney-fied whirlwind tour through creative professions and bare-minimum MVPs. Oh, and there are TikTok and YouTube apps for when you inevitably give up and resign yourself to consuming entertainment. I only reach for mine when I need a third screen for YouTube videos, and even then I often just balk at the screen ghosting and toss it back in the drawer.
I’m an artist and musician and my iPad Pro is pretty much useless because it can’t run any real software programs. It can only run what are essentially mobile apps. It’s pretty pathetic considering it’s the price of a new laptop.
Have you tried apps like Procreate / Linea Sketch? They are really, really good and you can't match the touch + stylus experience on a desktop computer. These are "real software" by all definitions.
I imagine you want to run a windowed system on it ("real apps"), and are just not used to touch UIs. It's just a different paradigm.
Containerisation software adds so much more than just a binary with layers of indirection.
Easy fine-grained control of individual application memory, namespace isolation (container-to-container communication on a need-to-know basis), A/B testing, ease of onboarding, multi-OS development, CD pipelines, ease of restarting containers, etc.
I echo this sentiment entirely. Multi-dev-environment, multi-machine/OS things really just work. Docker et al. also really shine with onboarding; a new recruit can literally get up and going in minutes.
I would not want to go back to the old ways of doing things.
Would a better design not be to bore downwards instead of building upwards? This seems like such an over-engineered concept that could be simplified and compartmentalised drastically.
There's a whole host of negatives I see in this implementation, not to mention the downsides of erecting a crane-like structure and the public opinion around it.
This is a UK company doing what I described above: https://gravitricity.com/ (I have no affiliation).
Drilling is also complex and expensive. You'd have to go down, reinforce it, and deal with pumping at the very least. On the other hand a crane is, well, a pretty small and simple (/well-known) thing. It's a crane and a stack of bricks; it doesn't feel particularly over-engineered to me.
Either may make sense given the surroundings, but I don't think building up is an obvious problem.
It's a problem from a personnel safety standpoint (what a catastrophic failure looks like in each case), from a unit installation cost standpoint (fabricating x amount of bricks with x tolerance in the grooves, plus a crane with y units of generation on top), and finally from a complexity standpoint of stacking the bricks (which I assume is not just going to be a stored procedure, and will involve some kind of error correction and monitoring). I just can't see anything positive about this solution in the slightest. I wouldn't want to be anywhere near that structure. A crane truly is a simple and well-known thing, but I have tangible knowledge of a large community pushback in the UK where all that was installed was a relatively small turbine on top of a hill.
Boring and reinforcement (concrete spraying) are tried-and-true technologies and, most importantly, they're out of sight.
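For a rough sense of scale (a back-of-envelope sketch with made-up figures, not either company's specs): gravity storage is just E = mgh either way, so the capacity doesn't depend on whether the mass travels up a tower or down a shaft; the disagreement is really about the civil engineering and where it sits.

```rust
fn main() {
    // Illustrative numbers only: a 500 t mass moving through 100 m of height,
    // at an assumed ~85% round-trip efficiency.
    let mass_kg = 500_000.0_f64;
    let height_m = 100.0;
    let g = 9.81;
    let efficiency = 0.85;

    let joules = mass_kg * g * height_m * efficiency;
    let kwh = joules / 3.6e6; // 1 kWh = 3.6 MJ

    // Prints roughly 116 kWh per cycle, the same whether the 100 m is a tower
    // or a borehole, which is why the debate is about construction, not capacity.
    println!("~{:.0} kWh per cycle", kwh);
}
```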
Another commenter echoed this here, but it's the same for me. I find running WSL2 on Windows easier to get up and running, and way more stable, than running Linux or macOS, which just 5 years ago would have sounded like the anecdote of a crazy person.
Things really just work: I get a stable desktop environment (not the prettiest compared to macOS, but at least the window + virtual desktop management is leagues better) and an even better development environment, without dual-booting.
Curious, I went in the opposite direction: I used Windows for decades, and WSL since I first heard of it. When I switched to Kubuntu I felt like I was finally home. I had always heard bad things about KDE, but what I found there might be the best desktop environment at the moment. I like it better than OS X, even.
Ofc there are some annoying things on Linux machines (e.g. having to research more before buying hardware), but Windows isn't without its annoyances either (e.g. bloatware like MS Edge that will try to get your attention once in a while; some things that you can easily achieve on a Linux machine are cumbersome or impossible to solve; and the ways you do them change much faster than on Linux, etc.).
I use both. But for dev stuff I prefer Linux by a lot.
This sounds like a cool project.
Considering it's a journalling app and I'm assuming the user just enters plain text, have you experimented with compression and decompression before and after SQLite storage?
It's actually a flash card app, not a journaling one, so the text entered is quite short. I haven't played around with the compression idea, but it sounds like it could be useful for large text entries.
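In case it's useful, here's a minimal sketch of the compress-on-write / decompress-on-read idea, assuming a Rust app using the rusqlite and flate2 crates; the app's actual language, schema, and table names aren't stated, so `cards` and `body` are made up. Note that SQLite itself doesn't compress, and for very short flash-card text the zlib overhead can make the stored blob larger than the original.

```rust
// Assumed Cargo dependencies: rusqlite (with the "bundled" feature) and flate2.
use flate2::read::ZlibDecoder;
use flate2::write::ZlibEncoder;
use flate2::Compression;
use rusqlite::{params, Connection};
use std::io::{Read, Write};

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute(
        "CREATE TABLE cards (id INTEGER PRIMARY KEY, body BLOB NOT NULL)",
        [],
    )?;

    let text = "Front: capital of France? Back: Paris.";

    // Compress before storing: zlib-deflate the text and keep the result as a BLOB.
    let mut enc = ZlibEncoder::new(Vec::new(), Compression::default());
    enc.write_all(text.as_bytes()).expect("compression failed");
    let compressed = enc.finish().expect("compression failed");
    conn.execute("INSERT INTO cards (body) VALUES (?1)", params![compressed])?;

    // Decompress after reading back.
    let blob: Vec<u8> =
        conn.query_row("SELECT body FROM cards WHERE id = 1", [], |row| row.get(0))?;
    let mut restored = String::new();
    ZlibDecoder::new(&blob[..])
        .read_to_string(&mut restored)
        .expect("decompression failed");
    assert_eq!(restored, text);

    // For text this short the blob is often *larger* than the input; compression
    // mostly pays off for long journal-style entries.
    println!("original: {} bytes, stored: {} bytes", text.len(), blob.len());
    Ok(())
}
```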