glass-'s comments

This also stores the name of the site in plain-text.


Making one big partition isn't the best idea, because OpenBSD defaults the way it does for stability, data integrity and, big surprise, security reasons[0].

That said, you shouldn't run out of space in the default partitions when building the system (ports are another story).

As far as I know, the installer defaults to giving you around 2GB in /usr/src (which is more than enough to hold the source and build everything), and if the disk isn't big enough to do that, it won't create the partition (60GB is big enough to get a separate /usr/src, so 120GB is surely enough).

This assumes the instructions[1] are followed, so everything is put in the right places and the object files aren't all dumped into one partition. Other than that, I don't know what could have gone wrong.

[0] http://www.openbsd.org/faq/faq4.html#Partitioning

[1] http://www.openbsd.org/faq/faq5.html#BldUserland
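For reference, the userland build described in [1] boils down to something like the following (a sketch from memory; check the FAQ for your release, as exact steps can change):

```shell
# Build and install userland from source (run as root).
cd /usr/src
make obj     # link the object dirs into /usr/obj, so builds
             # don't fill the /usr/src partition
make build   # compile and install the userland
```

The `make obj` step is the part that matters for partition sizing: it's what keeps object files out of /usr/src.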


OK, I'll have another go with the installer's default partitioning scheme this time around, as I intend to use the mtier binary upgrade packages. Last time I tried the compilation, I had something like 75KB free after it completed. It worked, though.

Your first reference mentions the idea of leaving a part of the hard drive unformatted to allow for making new versions of partitions if necessary - I might try that as I keep little user data on this machine.


The problem in that thread was caused by building ports, which will fill up /usr. If you're going to build ports, I'd recommend changing the working dirs[0] to a different partition (I use /home).

Because of how the auto-partitioner decides the sizes, a 120GB disk should get the same partition sizes as a 256GB disk (the 256GB disk will just get more /home space). My 256GB disk has 2GB each for /usr, /usr/obj and /usr/src, and with that I can build the kernel, userland and xenocara with no problems (unless I've filled up /usr by building Firefox from ports).

[0] http://www.openbsd.org/faq/faq15.html#PortsConfig
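For anyone who wants the concrete knobs: the working dirs from [0] go in /etc/mk.conf. Something like this (the paths under /home are my choice, not defaults):

```shell
# /etc/mk.conf -- divert ports build dirs off /usr (example paths)
WRKOBJDIR=/home/ports/pobj               # where ports get extracted and built
DISTDIR=/home/ports/distfiles            # downloaded source tarballs
PACKAGE_REPOSITORY=/home/ports/packages  # packages produced by the build
```

With that in place, big builds eat /home space instead of /usr.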


If you want to build ports on a machine, you should add a large (several dozen GB at least) /usr/ports partition. Keep in mind that monsters like firefox and libreoffice have insane space requirements.

Diverting ports builds to /home is a hack that works around the wrong choice made during install (which will invariably happen when you first start out, that's ok -- be prepared to reinstall with better parameters once you learn more about what you need).


Definitely a learning experience.

I have built Iceape on gNewSense Linux on an X60, and I can say that under Linux you need at least 20GB of space and 6 hours on the Core Duo with 2GB of RAM.

I always plan on a throw-away install when first playing with an operating system and repeat the install when I know what the 'rules' are. I shall be encrypting my /home just for peace of mind if I leave the laptop on the bus, so that is another thing to research.
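In case it saves you some research: the usual way to encrypt a single partition on OpenBSD is a softraid(4) crypto volume set up with bioctl(8). Roughly (device names here are examples; yours will differ, and the partition needs its disklabel type set to RAID first):

```shell
# Create an encrypted volume on the partition that will hold /home.
# sd0k is an example chunk; the attached crypto volume shows up as a
# new disk (say sd2).
bioctl -c C -l /dev/sd0k softraid0   # -c C = crypto; prompts for a passphrase
newfs /dev/rsd2a                     # new filesystem on the crypto volume
# then mount it on /home via /etc/fstab
```

Full-disk encryption at install time works the same way, just against the whole disk instead of one partition.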


mtier employs a couple of OpenBSD developers to do those builds.


> Which language?

Haven't you been paying attention? You're supposed to write your security-critical code in a language that was released 5 months ago and has no major deployments yet.


This view is too cynical even for me. ;)


> "They should be contributing their changes back to OpenSSL versus forking".

I suppose that this attitude comes from people who think the fork happened solely because of Heartbleed. The real reasons are a little more intricate[0], but the straw that broke the camel's back was the discovery that OpenSSL had known about certain bugs and let patches to fix them rot in its bug tracker for years. It's hard to contribute changes back when a project won't take patches.

OpenSSL could have taken LibreSSL's changes if they wanted to, it's all public in the OpenBSD source tree, but they haven't.

[0] http://www.tedunangst.com/flak/post/origins-of-libressl


Great write-up. Thanks for sharing. :-)


This blog post about Wasabi is worth a read (another perspective from someone who worked there): http://www.tedunangst.com/flak/post/technical-debt-and-tacki...


Great post, and it changed my perspective on what Wasabi was. This part makes complete sense:

Working on FogBugz changed my perspective on technical debt. I used to believe, as I suspect many do, that it was strictly a bad thing. You took a shortcut because you’re lazy, and then it comes back later to bite you. Count how many times that Wikipedia article uses the word “lack”. As the originator of the term Ward Cunningham explains, that’s off the mark. Buying something with credit doesn’t automatically imply you’re not going to pay your bills.

Instead of thinking of technical debt as yesterday’s work that I failed to do, I think of it as tomorrow’s feature I can have today. You have to pay interest, but in the mean time you’re shipping a product, have a roof over your head, and are keeping the lights on. A much hipper programmer might say something like “you ain’t gonna need it.”

In one sense, Wasabi was a rather substantial payment on the debt we had accumulated. FogBugz was written in a now dead language. Building a compiler extended the life of the product, though of course now we had to pay for the compiler, too. So maybe it was more like a bridge loan, or refinancing. The financial wellbeing of Fog Creek at the time depended on FogBugz, so a total rewrite would have been a terribly risky investment. Even if it costs more in the long run, spreading those payments out over time gets you a lot of stability.

While I also enjoyed Coding Horror's "Has Joel Spolsky Jumped the Shark?"[0], Jeff either missed this point or ignored it on purpose to support the shark-jump theory.

[0] http://blog.codinghorror.com/has-joel-spolsky-jumped-the-sha...


Worth it for the concept of technical debt refinancing.


With those categorisations, a maybe-probably-theoretical phase-of-the-moon vulnerability such as CVE-2010-1633 gets put into the same category ("remote information leak") as Heartbleed. This is why severity levels exist.


But it seems to me the difference you are pointing to is in probability of the adverse possibility being realized rather than severity of the impact if the adverse possibility is realized. In risk management, these are usually rated on separate axes.


That probability assessment is pretty dangerous to rely on. Lots of vulnerabilities look extremely improbable, until the exploit tool starts to circulate. This is an age-old problem in software security: when you try to score things in a risk management rubric, you wind up hair-on-fire over simple vulnerabilities that managers can understand, and wind up not fixing sophisticated vulnerabilities that will be commonplace among attackers soon.

It's also important to remember that there's a window in which this stuff gets fixed, probably just weeks long, anchored to the announcement of the vulnerability. Once that window passes, any update you're going to see about exploitability is basically going to announce "everyone is now owned". That's a failure of vulnerability scoring!


I don't disagree that probability carries less weight in this type of software security risk assessment than in some others; my point was that probability is a different thing from severity, so it is not necessarily an error that issues with different probabilities of being exploited but similar consequences get the same severity rating.

Your point, however, is well made.


> Rust

So instead of using C we should write our security-critical software in a language that was released 3 months ago and has no major deployments.


Yes, agreed. We should choose languages based on their suitability for the job, not how long they've been free to ruin the world.

That said, if you have some sort of external constraint about using a language merely because other people are using it, there's Ada, which is over three decades old and has numerous major deployments.


> Unless all you use is enterprisey crap or play AAA titles from bad companies

Is Photoshop a AAA title or enterprisey crap? How about all the other common software that many normal people use daily that doesn't run under Wine?


That software needs to be replaced with ethical versions.


And instead people are going to use what?


Not sure yet, but as soon as there's something, I'm making the switch.

Too many extensions are required to try to make Firefox into something usable, mainly reverting changes or fixing broken or missing features: ad blocking, the sidebar, the download manager, bringing back the add-on bar, putting back the ability to disable JavaScript, a session manager, a cookie manager, the ability to take screenshots, mouse gestures, a tab manager, …

