arzig's comments | Hacker News

So you’re saying the solution, since Republicans control Congress, is that they should do away with the filibuster, pass a clean funding bill, and then continue governing under bare majorities with no filibuster?

I don’t particularly like that outcome and I think the public understands that democrats have at least some leverage or they wouldn’t be acting in this way.


The way this normally works is, when the House has passed something and the Senate won't, it goes to a reconciliation committee (with members from both houses), and they bring a compromise bill back, and both houses vote on it.

OK, the House passed something. The Senate has made it very clear that they won't pass it. Well, who's preventing the next step from happening? Speaker Johnson, that's who.


That's it, you've got it. Majority vote, take responsibility for both benefits and problems. Do away with the fig leaf of bipartisan consent.

Typically it does require bipartisan consent as the house and the senate are often not controlled by the same party.

The thing to note with the filibuster is that it's only in the senate because the house got rid of theirs a long time ago.


Sure, there are sessions where negotiation must happen between House and Senate. I personally think we should abolish the filibuster - it's undemocratic in the extreme. Republicans (only!) still have the "Hastert Rule" for the House, which is equally undemocratic, and named after a convicted sex offender.

I'm urging the current Senate, controlled by Republicans, the party also controlling House and Presidency, to get rid of the filibuster because then the ruling party could not escape responsibility for their legislative actions.


Texas does this. They have lots of deals with data centers to consume additional base load where they basically get load shed first in the summer. Presumably these crypto miners are happy with the arrangement or they wouldn’t have entered into it.


What happens when you add 200+1 in a situation where the compiler cannot statically prove that this is 201?


Your example also gets evaluated at comptime. For more complex cases I wouldn't be able to tell you, I'm not the compiler :) For example, this gets checked:

  let ageFails = (200 + 2).Age
  Error: 202 can't be converted to Age
If it cannot statically prove it at comptime, it will crash at runtime during the type conversion operation, e.g.:

  import std/strutils

  type Age = range[0 .. 200]  # the range type from the example above

  stdout.write("What's your age: ")
  let age = stdin.readLine().parseInt().Age
Then, when you run it:

  $ nim r main.nim
  What's your age: 999
  Error: unhandled exception: value out of range: 999 notin 0 .. 200 [RangeDefect]


Exactly this. It fails at runtime. Consider a different example instead: say the programmer thought the age was constrained to 110 years. Now, as soon as a person turns 111, the program crashes. A stupid mistake in a programmer's assumption turns into a program crash.

Why would you want this?

I mean, we've recently discussed on HN how most sorting algorithms have a bug from using ints to index into arrays when they should be using (at least) size_t. Yet for most cases it's OK, because you only hit the limit rarely. Why would you want to further constrain the field? Wouldn't it just be a source of additional bugs?
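
For concreteness, a rough Nim sketch of that failure mode (the type name and the 110 bound are just illustrative):

  type Age = range[0 .. 110]   # the programmer's (wrong) assumption about the maximum

  var age: Age = 110
  age = Age(age + 1)           # someone turns 111: raises RangeDefect at runtime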


Once the program is operating outside the bounds of the programmer’s assumptions, it’s in an undefined state that may cause a crash at a later point in time, in a totally different place.

Making the crash happen at the same time and space as the error means you don’t have to trace a later crash back to the root cause.

This makes your system much easier to debug at the expense of causing some crashes that other systems might not have. A worthy trade-off in the right context.


It's OK for an out-of-bounds exception to crash the program. It's not OK for a user input error to crash the program.

I could go into many more examples, but I hope I am understood. I think these hard-coded definitions of ranges at compile time cause far more issues than they solve.

Let's take a completely different example: the size of a database field for a surname. How much is enough? Turns out varchar(128) is not enough, so now they've set it to 2048 (not a project I work(ed) on, but one I'm familiar with). Guess what? Not in our data set, but theoretically, even that is not enough.


> It's OK for an out-of-bounds exception to crash the program. It's not OK for a user input error to crash the program.

So you validate user input; we've known how to do that for decades. This is a non-issue. You won't crash the program if you require temperatures to be between 0 and 1000 K and a user puts in 1001; you'll reject the user input.

If that user input crashes your program, you're not a very good programmer, or it's a very early prototype.
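
A minimal sketch of that in Nim (the Kelvin type and its bounds are made up for illustration): check the parsed number against the type's bounds and reject it before ever converting.

  import std/strutils

  type Kelvin = range[0 .. 1000]        # hypothetical constrained temperature type

  proc readTemperature(raw: string): Kelvin =
    let n = raw.parseInt()              # raises ValueError on non-numeric input
    if n < int(Kelvin.low) or n > int(Kelvin.high):
      raise newException(ValueError, "temperature out of range: " & raw)
    result = Kelvin(n)                  # the conversion can no longer fail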


I think, if I am following things correctly, you will find that there's a limit to the "validate user input" argument - especially when you think of scenarios where multiple pieces of user input are gathered together and then have mathematical operations applied to them.

E.g. if the constraint is 0..200 and the user inputs one value that is being multiplied by our constant, it's trivial to ensure the user input is less than the range maximum divided by our constant.

However, if we have to multiply by a second, third (and so on) piece of user input, we get to the position where we have to divide our currently held value by a piece of user input, check that the next piece of user input isn't higher, and then work from there (this assumes that the division hasn't caused an exception, which we will need to ensure doesn't happen, e.g. if we have a divide by zero going on).
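
To make that bookkeeping concrete, the per-input pre-check might look something like this in Nim (names and bounds are illustrative, and it only handles non-negative factors):

  type Score = range[0 .. 200]          # illustrative constrained result type

  proc canMultiply(current: Score, factor: Natural): bool =
    ## True if current * factor would still fit in Score.
    if current == 0 or factor == 0:
      return true                       # the product is 0, which is always in range
    result = factor <= int(Score.high) div int(current)

Every additional operand needs another check like this, which is the complexity being pointed out here.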


I mean, yeah. If you do bad math you'll get bad results and potentially crashes. I was responding to someone who was nonsensically ignoring that we validate user input rather than blindly putting it into a variable. Your comment seems like a non sequitur in this discussion. It's not like the risk you describe is unique to range constrained integer types, which is what was being discussed. It can happen with i32 and i64, too, if you write bad code.


Hmm, I was really pointing at the fact that once you get past a couple of pieces of user input, all the validation in the world isn't going to save you from the range constraints.

Assuming you want a good-faith conversation, the idea that there's bad math involved seems a bit ludicrous.


I believe that the solution here is to make crashes "safe", e.g. with a supervisor process that should either never crash or be resumed quickly, and child processes that handle operations like user input.

This works together with the fact that the main benefit of range types is on the consumption side (i.e. knowing that a PositiveInt is not 0), and that it is doable to use try/catch or an equivalent operation at creation time.
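
A rough Nim sketch of that shape (assuming the Age range type from earlier, and Nim's default build settings, under which RangeDefect is catchable): the supervising loop treats a defect from one unit of work as that request failing, not the whole program.

  import std/strutils

  type Age = range[0 .. 200]

  proc handleRequest(raw: string) =
    let age = raw.parseInt().Age        # may raise ValueError or RangeDefect
    echo "accepted age ", age

  proc supervise(requests: openArray[string]) =
    for raw in requests:
      try:
        handleRequest(raw)
      except CatchableError, RangeDefect:
        echo "request rejected: ", getCurrentExceptionMsg()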


For some reason your reply (which I think is quite good) makes me think of the adage "Be liberal in what you accept, and conservative in what you send" (Postel's law).

Speaking as someone who's drunk the Go Kool-Aid: the (general) advice is not to panic when it's a user input problem, only when it's a programmer's problem (which I think is a restatement of your post).


This happens with DB constraints all the time: the user gets an error and at least their session, if not the whole process, crashes. And yes, that too is considered bad code that needs fixing.


> A stupid mistake in a programmer's assumption turns into a program crash.

I guess you can just catch the exception in Ada? In Rust you might instead manually check the age validity and return Err if it's out of range. Then you need to handle the Err. It's the same thing in the end.

> Why would you want to further constrain the field

You would only do that if it's a hard requirement (this is the problem with contrived examples, they make no sense). And in that case you would also have to implement some checks in Rust.
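
In Nim terms, the "check and return an error value" flavor might look like this (illustrative sketch, using Option in place of Rust's Result):

  import std/[options, strutils]

  type Age = range[0 .. 200]

  proc parseAge(raw: string): Option[Age] =
    ## Manual validity check; the caller has to handle the none() case.
    let n = raw.parseInt()              # still raises ValueError on garbage input
    if n < int(Age.low) or n > int(Age.high):
      return none(Age)
    result = some(Age(n))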


Also, I would be very interested to learn the case for a hard requirement for a range.

In almost all the cases I have seen, it eventually breaks out of confinement. So it has to be handled sensibly. And, again, in my experience, if it's built into constraints, it invariably is not handled properly.


Consider the size of the time step in a numerical integrator of some chemical reaction equation: if it gets too big, the prediction will be wrong and your chemical plant could explode.

So time steps that are too big cannot be used, but constant-sized steps are wasteful. Seems good to know the integrator can never quietly be wrong, even if you have to pay the price that the integrator could crash.
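
A tiny Nim illustration of that (the bounds are invented; Nim range types work over floats too):

  type StepSize = range[1e-6 .. 1e-3]   # hypothetical safe step bounds

  proc eulerStep(x: float, dt: StepSize): float =
    ## One explicit Euler step for dx/dt = -x; the signature alone
    ## guarantees the step size can never silently be too large.
    result = x + dt * (-x)

  echo eulerStep(1.0, StepSize(5e-4))   # fine
  # echo eulerStep(1.0, StepSize(0.1))  # out of range: rejected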


Exactly, but how do you catch the exception? One exception catch to catch them all, or do you have to distinguish the types?

And yes... handle errors on the input and you'd be fine. But how would you write code that is cognizant enough to catch out-of-range for every +1 done on the field? Seriously, the production code then devolves into copying the value into something else, where operations don't cause unexpected exceptions. Which is a workaround for a silly restriction that should not reside at the runtime level.
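
A rough Nim sketch of the workaround described (names are illustrative): do the arithmetic on a plain int and convert back once at the boundary.

  type Age = range[0 .. 200]

  proc addYears(a: Age, years: int): Age =
    let n = int(a) + years              # plain int arithmetic, no surprise range check
    if n < int(Age.low) or n > int(Age.high):
      raise newException(ValueError, "resulting age out of range: " & $n)
    result = Age(n)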


> Why would you want this?

Logic errors should be visible so they can be fixed?


Honestly the only use I’ve found for AI so far is for executing refactorings that are mechanical but don’t fit nicely into the rename/move or multi-cursor mode.

I’ll do it once or twice, tell the LLM to do it and reference the changes I made, and it’s usually passable. It’s not fit for anything more, IMO.


There’s a non-trivial chance this interacts with credit card processing. There is also the legal liability if you tell someone they are cancelled and then continue charging them. So probably not something you trust an intern to do.


This is stuff that companies already handle with their current cancellation pipelines. Hooking up a short circuit that flags whatever user in their DB as having cancelled is something that I would absolutely toss a junior engineer at and expect them to finish in three or so working days, maybe slightly longer.

The only way it's more onerous than that is if companies have an absolutely shit design under the hood, or they're using malicious compliance to argue that this feature specifically needs eight weeks of planning poker and at least five senior engineers to sign off on each iteration of the design phase.


Is this close enough? https://github.com/lima-vm/lima


Isn’t Pop_OS shipping ancient components at this point due to their hare-brained idea to try and create their own DE and pinning their next release to it?


RDR is now working fine on Pop_OS with Proton 10.x.


You use distrobox (https://distrobox.it/) and move on with your life. At work I seamlessly use multiple versions of Ubuntu on a host Fedora box without messing with VMs, without issue. That includes building things like .deb packages.
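
For anyone who hasn't tried it, the basic flow is roughly (the container name is just an example):

  distrobox create --name ubuntu-22 --image ubuntu:22.04
  distrobox enter ubuntu-22
  # inside the container: apt, dpkg-buildpackage, etc. behave as on a normal Ubuntu box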


I feel like there have been rumblings of an Honor Harrington (David Weber) movie for decades at this point. It may just not happen.


Yep, simplified by removing all those silly extraneous ‘u’s


Other way around. The British wanted to differentiate themselves from colonial hillbillies so they tried to assume the appearance of culture by making the words look more French.

