Hacker News

+1, I use exclusively free tools for this exact reason. I've been using the same tools for 15 years now (GCC + an IDE), and they work great.

There is a 0% chance that I'm going to subscribe to being able to program, because it's actively a terrible idea. You have to be very naïve to think that any of these companies are still going to be around and supporting your tools in 10-20 years' time, so if you get proficient with them you're absolutely screwed.

I've seen people say that AI agents are great because instead of using git directly, they can ask their AI agent to do it. That would be fine if it were a free tool, but instead you're subscribing to the ability to even start and maintain projects.

A lot of people are about to learn an extremely blunt lesson about capitalism



A lot of people's problems with Git would go away if they just took a weekend and "read the docs." It's shocking how resistant most people are to the idea of studying to improve their craft.

I've been spending a few hours a week with my team, training them on foundational things, while every other team in the company just plods along, doing things the same way they always have, which already wasn't working. It's gotten to where my small team of 4 is getting called in to clean up after these much larger teams fail to deliver. I'm pretty proud of my little junior devs.


This is a reply to an old comment https://news.ycombinator.com/item?id=44452679 (since I cannot reply in the original thread)

> Even assuming python's foreach loop in these cases get optimized down to a very bare for loop, the operations being performed are dominated by the looping logic itself, because the loop body is so simple.

> Each iteration of a for loop performs one index update and one termination comparison. For a simple body that is just an XOR, that's the difference between performing 5 operations (update, exit check, read array, XOR with value, XOR with index) per N elements in the one loop case versus 7 operations (update, exit, read array, XOR with value, then update, exit, XOR with index) in the two loop case. So we're looking at a 29% savings in operations.

> It gets worse if the looping structure does not optimize to a raw, most basic for loop and instead constructs some kind of lazy collection iterator generalized for all kinds of collections it could iterate over.

> The smaller the loop body, the higher the gains from optimizing the looping construct itself.

Let's test your claims:

  import random
  import time

  n = int(1e7)
  A = list(range(1, n + 1))
  random.shuffle(A)
  print("Removed:", A.pop())  # remove one element; we'll recover it via XOR

  t = time.time()
  result = 0
  for idx, val in enumerate(A):
    result ^= idx + 1  # fold in the 1-based index (1..n-1)
    result ^= val      # fold in the value
  result ^= n          # enumerate stops at n-1, so fold in n separately
  print("1-loop:", time.time() - t)
  print("Missing:", result)

  t = time.time()
  result = 0
  for value in range(1, n + 1):  # XOR of the full range 1..n
    result ^= value
  for value in A:                # XOR of the surviving values
    result ^= value
  print("2-loop:", time.time() - t)
  print("Missing:", result)
A sample run gives:

  Removed: 2878763
  1-loop: 1.4764018058776855
  Missing: 2878763
  2-loop: 1.1730067729949951
  Missing: 2878763
And after swapping the order of the code blocks just to ensure there's nothing strange going on:

  Removed: 3217501
  2-loop: 1.200080156326294
  Missing: 3217501
  1-loop: 1.5053350925445557
  Missing: 3217501
So indeed we see roughly a 20% speedup, only in the opposite direction of what you claimed. Perhaps it's best not to assume when talking about performance.
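One plausible explanation (a CPython-specific sketch; the function names here are mine, not from the thread): the single-loop body pays for tuple allocation and unpacking from enumerate, plus an extra addition, on every iteration, which the bytecode makes visible:

  import dis

  def one_loop(A, n):
      # Single pass: XOR each 1-based index and each value, then fold in n
      # (enumerate only reaches index n-1, since one element was removed).
      result = 0
      for idx, val in enumerate(A):
          result ^= idx + 1
          result ^= val
      return result ^ n

  def two_loop(A, n):
      # Two simple passes: XOR the full range 1..n, then the surviving values.
      result = 0
      for value in range(1, n + 1):
          result ^= value
      for value in A:
          result ^= value
      return result

  # Sanity check on a small input: [1, 2, 4, 5] with 3 removed, n = 5.
  assert one_loop([1, 2, 4, 5], 5) == 3
  assert two_loop([1, 2, 4, 5], 5) == 3

  # On CPython, the disassembly of one_loop shows the enumerate body doing
  # strictly more work per iteration (tuple unpacking plus an extra add):
  dis.dis(one_loop)

Whether that fully accounts for the measured gap would need profiling, but it cuts against the assumption that fewer loops automatically means fewer operations per element.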


I think all you have managed to prove is A) Python is absurd, and B) you need to learn about appropriate boundaries and when to drop something.

This conversation was a lifetime ago. You couldn't reply to the original thread for a reason.



