Hacker News | smush's comments

> Folks should not underestimate the benefits of having a single primary common language everywhere.

Indeed, that's why I'm getting past my static HTML comfort zone and trying out the SAFE template for F# development on both client and server via .NET Core. Super new at it, but excited to learn.

Apparently there is this thing out there called Fable that lets you use the same domain types on both client and server, now THAT sounds like fun to me.


Some do.

Portrait style ones are the BB Key2 or Unihertz Titan

Landscape keyboard is the Fxtec Pro1


This article could also be titled 'The Case for Being Cautious About the Software You Execute', and with a few tweaks to the text it could support the sentence below.

"If you go to a bad website, it might cause bad things" is ~=~ "If you use a bad extension, it might cause bad things"

I think that the advice to be wary of extensions adding permissions is quite astute and a good reminder to each of us to make sure we are using well-vetted (to our own satisfaction) software, in-browser and out.

But I'd posit that on the whole, browser extensions do much more good than harm. uBlock Origin has stopped many a grandma from clicking a fake download button in a banner ad.

Add to that the heavyweight tools of NoScript, uMatrix, Privacy Possum/Badger, absolute enable right click, SingleFile, Decentraleyes, and more that I've yet to learn about, and they greatly outweigh the drawbacks of the likes of extensions mentioned in the article and 'web of trust' etc. that have gone to pot.

I react so strongly to this not because I disagree with vetting software, but because I don't want browsers having yet another excuse to yank control and features away from me as a user. I'm already nervous that Mozilla will coyly refuse to support all extensions in their new browser, replace Fennec with a worse system, and leave me stuck with it.

Chrome on Android supports no extensions at all, and Chrome is tightening the screws on extensions on desktop, so this is not a slippery slope I'd like to step onto.


This is an oversimplification of the problem. Malicious websites run under a different security model than extensions.

Not to say I am for restricting extensions more, but just to vouch for the idea that there is, in fact, a valid and novel point to treating extensions cautiously versus other kinds of software. Even programs running locally and unsandboxed will have to work quite a bit to compromise a browser.


Yes, soon we will be back at: Don't download code from the internet!


> If D&D is sufficiently complicated rules wise that it is actually more fun to have software take care of that so you can just focus on the tactical combat and scripted story choices, then the video game is probably your best bet!

No kidding. I've been writing a WinForms program that is basically a combination of character-progression tracking, inventory management, and a basic battle system for Fantasy Flight Games' Star Wars RPG.

Why? Do we not want the person-to-person experience? Hardly! We need more wetware space for strategy and creative ideas, so we need to free up the parts that calculate how many ability vs. proficiency dice to roll, which talents may modify the roll, and so on.
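That bookkeeping can be sketched roughly like this. The face tables below are simplified placeholders, not the real Fantasy Flight dice faces (the actual narrative dice also carry triumph, despair, and threat symbols):

```python
import random

# Hypothetical, simplified face sets: each face yields (successes, advantages).
ABILITY = [(0, 0), (1, 0), (1, 0), (2, 0),
           (0, 1), (0, 1), (1, 1), (0, 2)]          # "green" d8 (placeholder faces)
PROFICIENCY = [(0, 0), (1, 0), (1, 0), (2, 0), (2, 0), (1, 1),
               (1, 1), (0, 1), (0, 2), (0, 2), (2, 0), (1, 0)]  # "yellow" d12 (placeholder faces)

def roll_pool(ability: int, proficiency: int, rng=random) -> tuple[int, int]:
    """Roll a dice pool and total up (successes, advantages)."""
    successes = advantages = 0
    for die in [ABILITY] * ability + [PROFICIENCY] * proficiency:
        s, a = rng.choice(die)
        successes += s
        advantages += a
    return successes, advantages
```

A talent that upgrades a die would then just swap an ABILITY entry for a PROFICIENCY entry before rolling, which is exactly the kind of fiddly rule the software can absorb.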


TL;DR: Should I keep fussing with PiVPN or try something like TincVPN?

Semi-OT: I just installed PiVPN to use with this protocol to set up a small VPN at home (all I want is to go to my domain, authenticate, and be on my LAN so I can RDP/VNC). The WireGuard bits worked great, and the install process was buttery smooth, even on a Raspberry Pi Zero W.

But my lack of networking knowledge is probably hamstringing me. I opened the WireGuard port on my router and confirmed that the DNS hostname I'm using resolves to my public IP, but I'm not able to get the WireGuard clients to connect. tcpdump doesn't show any incoming traffic on the port at all.
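One way to isolate whether UDP reaches the Pi at all, independent of WireGuard itself, is a bare UDP listener/sender pair. A rough sketch (51820 is WireGuard's conventional default port; substitute whatever PiVPN configured):

```python
import socket

def listen(port: int = 51820, timeout: float = 30.0):
    """Run on the Pi: wait for a single UDP packet on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("0.0.0.0", port))
        s.settimeout(timeout)
        try:
            data, addr = s.recvfrom(1024)
            return addr  # sender's (ip, port): something got through
        except socket.timeout:
            return None  # nothing arrived: suspect forwarding upstream

def send(host: str, port: int = 51820):
    """Run from outside your LAN (e.g. a phone on cellular, or a VPS)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"ping", (host, port))
```

If listen() times out while tcpdump on the Pi also shows nothing, the packet is being dropped upstream of the Pi (router, double NAT, or the ISP), not by the Pi itself.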

Should I keep fussing with PiVPN or try something like TincVPN or Tailscale? I have not been able to get a VNC or RDP session going over Tailscale even though all my machines can connect to the Tailscale network.

I want to use WireGuard (everyone says it is so good, and OpenVPN does seem a bit boring), but ultimately I'm just hitting a wall with the use case of 'authenticate, and now you're on your home LAN, connected as if you were at home on Wi-Fi'.


Make sure that the port is correct and it is UDP (not TCP).

(I just did the same setup with PiVPN. Somehow I got the wrong port number at first, but then it worked.)


OK, it defaulted to UDP and I got nothing; I changed to TCP and got nothing. I'll change it back and try again.

I will also double check the port number.


While you're at it, check and double check your port forwarding settings. I got bit by this recently.

My owned router had the right ports opened, but the AT&T bridged router did not. Be sure to open ports on both routers; otherwise your own router never gets a chance to allow the traffic in the first place.


Have you tried using the tailscale interface IPs (100.x.x.x)? I've been able to use VNC over those addresses.


I have, but I've not been able to VNC over them. I've made sure the firewall rules let Tailscale do anything, and I've even tried turning both firewalls off entirely, but TightVNC doesn't like it, nor does RDP (with RDP enabled in System settings).

But this does give me hope that it can be done in the first place, I just have some setting goofed up somewhere. I will restart my tailscale stuff now that I know someone out there has done it.

Thanks!


All true points, but I'd rather risk being tracked by a company that would be completely cratered (even more than it already has been by the Privacy One acquisition) than go straight to the devil himself. So I use the !s bang and just mentally treat it as a slightly better form of !g, rather than what Startpage was before: all the benefits of Google search without most of the drawbacks.


Hi - Startpage person here. Just FYI: We don't collect or share your personal information. The Startpage founders continue to run the company as before, and they have control over the privacy components of Startpage. With this investment, we hope to further expand our privacy features & reach new users. You may have already seen some of these new initiatives taking place:

1) Unfiltered News Tab, launched in November: https://www.startpage.com/blog/product-updates/launching-unp...

2) Privacy Please! newsletter, launched last month: https://startpage.com/blog/company-updates/welcome-privacy-p...

3) We're pushing out more info via our blog & social than before, giving greater insight into:

How we make money: https://startpage.com/blog/privacy-awareness/advertising-res...

How we keep your search private: https://www.startpage.com/blog/privacy-awareness/how-does-st...


Is it OS and SDK bloat? Where do you think most of the cruft is coming from?

Let's say I had a given LOB application written in vb6, c#.net winforms, heck, maybe even WPF. All single-threaded.

If I re-wrote the same application (features, UI, still single-threaded) in native Win32, would that improve the latency?


Who knows? My hunch is there are two main factors influencing this. The first is that constraints breed creativity: if you know you only have 133MHz on a single CPU, you squeeze as much as possible out of every cycle; on modern CPUs, what's a few thousand cycles between friends?

The second is SDK/framework/etc. bloat, which is probably influenced by the first. With excess cycles you don't care if your tools start to bloat.

I think it's primarily an issue of attitude. If you want to write fast software you'll do it, regardless of the circumstances. It all starts with wanting it.


Think harder.

I worked on a framework in the nineties and did such things as render letters to pixels. Here are some of the optimisations we did then, compared to now:

We used much lower output resolution.

We used integer math instead of floating point, reducing legibility. How paragraphs were wrapped depended on whether we rendered it on this monitor or that, or printed it.
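A toy illustration of that device-dependent wrapping (the advance width and scale factors here are made-up numbers, not real font metrics):

```python
# Toy model: a glyph's "ideal" advance width gets rounded to whole device
# pixels, so the same paragraph fits a different number of characters per
# line on different devices, even at the same physical line width.
def chars_per_line(line_width_px: int, ideal_advance: float, scale: float) -> int:
    device_advance = round(ideal_advance * scale)  # integer pixels only
    return line_width_px // device_advance

screen  = chars_per_line(600,  7.3, 1.0)  # advance rounds 7.3 -> 7 px
printer = chars_per_line(1800, 7.3, 3.0)  # advance rounds 21.9 -> 22 px
# The screen line fits 85 characters; the 3x-wider printer line only 81.
```

The rounding error accumulates differently at each scale, so a paragraph could wrap one way on screen and another way on paper.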

We used prescaled fonts instead of freely scalable fonts for the most important sizes, and font formats that were designed for quick scaling rather than high-quality results. When users bought a new, better monitor they could get worse text appearance, because no longer was there a hand-optimised prescaled font for their most-used font size.

We used fonts with small repertoires. No emoji, often not even € or —, and many users had to make up their minds whether they wanted the ability to type ö or ø long before they started writing.

Those optimisations (and others; that list is far from complete) cost a lot of time for the people who wrote the code or manually scaled the fonts, and led to worse results for the users.

I think you're the kind of person who wouldn't dream of actually using anything other than antialiased text with freely scalable fonts and subpixel inter-letter spacing. You just complain that today's frameworks don't provide the old fast code that you wouldn't use, and think developers are somehow to blame for not wanting to write that code.


TransUnion does plaintext passwords.

I found that out when I typed in (for example) a 25-character password; at some point the field got truncated, and I figured out that if I backspaced (IIRC) 4 characters, my saved password worked.

-_-


This chain of logic does not follow. It is possible that your bank is properly salt+hashing your password - the truncation may have been on the back end before, and is now exposed to you because the field is shorter.


Maybe - but then why not state clear password length requirements on the password entry screen, and/or have the password HTML input accept characters only up to that max length?

Additionally, silent truncation combined with 'maybe we do salt and hash after all' makes no sense, IMO. That's not to say it isn't possible; only that the whole point of a hash is that it converts input of arbitrary length to a fixed-length output.

Therefore, truncating the data that is fed into the hash would be extra work for no benefit, because the hash function always produces output of the same length anyway.
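A quick sketch of that point: the digest length is fixed regardless of input size (SHA-256 here purely as an illustration; a real login system should use a salted, slow password hash). One genuine wrinkle in the 'no reason to truncate' argument is bcrypt, which only considers the first 72 bytes of the password.

```python
import hashlib

# A cryptographic hash maps any input length to one fixed output length,
# so truncating the input before hashing saves essentially nothing.
for pw in ("short", "x" * 10_000):
    digest = hashlib.sha256(pw.encode()).hexdigest()
    assert len(digest) == 64  # 256 bits = 64 hex characters, always
```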


That doesn't necessarily mean they store in plaintext. They might just only hash the first 20 characters.


No kidding. Already, I've found some interesting articles to read.

I'm taking a look at https://dwheeler.com/essays/easy-cross-platform-gui.html, which references XULRunner and other tools that have fallen out of favor since 2009.

Would you still recommend wxWidgets (for the 80% of use cases) to those wanting to invest in FLOSS cross-platform GUI apps? BoaConstructor et al. look interesting.

Thanks for taking the time to look at this comment. If it helps give you some context, I'll throw in that I currently am most familiar with WinForms .NET apps or very small Win32 native applications, and have avoided JS successfully so far.


A lot of that stuff has been overtaken by events, but I clearly say that the essay was written in 2009. Nevertheless, if you want to see what I wrote in 2009, there it is. It hasn't disappeared into the ether; a disturbingly large amount of information written only a few years ago has totally disappeared. One of the reasons so much information disappears is that the website can no longer stay running. If your website is designed to last, the information is more likely to stay available. Yes, I know it's more complicated than that. But it's a start.


wxWidgets is a poor platform abstraction that just results in the lowest common denominator of UI.


I haven't seen any cross-platform widget APIs that let you build a macOS toolbar, for example, at least among the popular ones. You mostly can only specify that the titlebar and the toolbar should be merged. Qt can draw something, but it will look like a Qt toolbar, not a Mac one.

So I don't see how wxWidgets is an outlier here.


It's fine for small apps and in-company utilities, and isn't hard to use at all if you're a C++ house.


Boa Constructor is the best RAD IDE ever! Why does no one understand this?


Semi-OT: Sometimes I'm baffled at how people don't know about rare things like this, but I wonder how long that can last.

Do you think the 'boxed up in a random lot, IDK what it is, some computer thing' behavior we've seen up to now with many rare electronics, games, computers, etc. will go away over time as more people grow up with internet access and web search?

Let's take something that is equally 'dead' in the sense of no longer being made: the Samsung Taylor, a Windows Phone developer device. A few devs got one, but otherwise it is gone with the wind. At a glance it looks like an innocuous slate smartphone. But if you searched its model number, you'd immediately know you had something rare and valuable to a certain niche (which I admit to being in).


I own a piece of semi-precious test equipment, a Fluke 9010a base unit along with some pods. A beloved mentor got it when an engineering company discarded it and gave it to me while it was mostly a curio. I used it on a retrocomputing project ten years ago.

Ten years ago, when I last used it, I was a bit surprised to find the set was worth about as much as "a new laptop." (Apparently people still use the Fluke for troubleshooting arcade machines.)

This week I was surprised again to discover the set is now worth about as much as "a new Mac laptop" (i.e. about twice). And many of the prices I was looking at were for non-working ones. Mine works.

So I think it's very easy to lay something aside with a vague idea it "might" be worth something without realizing it's worth the effort to find out exactly how much. You don't know what you don't know.

(I don't think I'd sell mine due to its sentimental value from who gave it to me and what I've used it for. Though I would probably sell it if it doubles again.)


I'm a bit into hunting for old film cameras, and you'd be surprised how many people barely do any checking before putting things up for sale (or worse, just throwing them away). I wonder how many old 'Nintendos' ended up in a landfill that way.

For instance here's three old Agfa folding cameras:

https://upload.wikimedia.org/wikipedia/commons/8/84/Agfa_Sup...

https://thumbs.worthpoint.com/zoom/images2/1/0311/07/agfa-au...

https://upload.wikimedia.org/wikipedia/commons/f/fe/Agfa_Iso...

One's barely worth anything, one's worth quite a bit, one's extremely rare. Would you know which is which just from looking?


I mean, if by "putting them up for sale" they're putting the thing on eBay as an auction, that's not exactly maladaptive behaviour. The market will, hopefully, recognize the value and set the price.


Honestly, the fact that it's hard to tell the difference is a non-trivial reason why they're valuable in the first place; it increases rarity.


Well, that's the thing with making history: when you're making it, it is often boring, thankless, and paved with dead ends. Does anybody want your shitty high school drawings? Only if you end up being the next Picasso. But who would have thought? Then there are artifacts that are worth a lot to the right dozen people, and literally nobody else would give you a dime.


> Do you think the 'boxed up in a random lot, IDK what it is, some computer thing' behavior we've seen up to now with many rare electronics, games, computers, etc. will go away over time as more people grow up with internet access and web search?

It's over for rare cars. A friend lives in Southern Africa and used to buy gems cheaply when the seller didn't know what they had, then ship them to Europe or the USA for a good profit. Now the owner can just Google it and see immediately that it's worth big bucks.


Corporations always destroy gear to protect IP, so scarcity for failed hardware is definitely skyrocketing as we speak.

Spinning up a minicomputer in the basement was a good, stupid nerd hobby a decade ago, but I can't see the same happening with retired AWS or GCP servers, now or in a decade.

