Handy-Man's comments

Who cares how many hours of work they're putting in? As long as tasks are getting done on time, it shouldn't matter what I am doing with my "hours".


I have often seen this comparison here, but this article aims to show that it’s a flawed and incorrect argument.



Title editorialized due to being too long


That's just the enterprise guarantee. The same applies to Azure OpenAI services and the API services provided by OpenAI directly.


local


For now.


> > local

> For now.

Like everyone's music and movie collections.


Context of the iTunes hard drive scandal, for those who don't get the reference:

https://web.archive.org/web/20160506100227/https://blog.vell...


Copilot is Microsoft's branding for all things AI for their products.


Except when it is actually Copilot, in which case they call it "Github Copilot."


While he does say he is leaving for some personal and meaningful project, let's see what it ends up being.


That "personal and meaningful" can mean just about anything.


More like hustle culture’s “spend more time with the family”


Spend more time with my side projects.


+1, it's amazing


`Kafka server with tons of stock-trading info`

Can you talk more about this? I personally have a Dell R720 and about 90TB available for use, and use 40TB of it for my media server. I'm wondering if that's some use case I want to look into.


DISCLAIMER: I don't really know what I'm doing. My knowledge of trading basically amounts to a few Coursera classes.

I wanted to play with paper trading strategies. I listen to websocket streams of cryptocurrency and stock ticker data [1], and the service listening on that socket just plops it into Kafka, with the partition key being the ticker name.

The reason I bother adding Kafka largely comes down to the ability to use the Kafka Streams API. Kafka Streams lets me do time windows over different trades, do a SQL-style join across two different streams, or filter out data that I don't think is relevant.
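To make the windowing part concrete, here is a minimal sketch of a tumbling-window aggregation in plain Python (the actual Kafka Streams API is Java; the tickers, prices, and timestamps below are made up for illustration):

```python
from collections import defaultdict

def tumbling_window_avg(trades, window_ms=60_000):
    """Average trade price per ticker per fixed (tumbling) time window.

    `trades` is an iterable of (ticker, price, timestamp_ms) tuples,
    mimicking records keyed by ticker on a Kafka topic.
    """
    # (ticker, window_start_ms) -> [price_sum, count]
    acc = defaultdict(lambda: [0.0, 0])
    for ticker, price, ts in trades:
        window_start = ts - (ts % window_ms)  # bucket into fixed windows
        bucket = acc[(ticker, window_start)]
        bucket[0] += price
        bucket[1] += 1
    return {key: s / n for key, (s, n) in acc.items()}

trades = [
    ("AAPL", 100.0, 0),
    ("AAPL", 102.0, 30_000),
    ("AAPL", 110.0, 61_000),  # falls into the next one-minute window
    ("MSFT", 50.0, 5_000),
]
print(tumbling_window_avg(trades))
# {('AAPL', 0): 101.0, ('AAPL', 60000): 110.0, ('MSFT', 0): 50.0}
```

Kafka Streams gives you the same idea (plus joins, event-time handling, and fault-tolerant state) without you having to hand-roll the bookkeeping.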

Kafka Streams gives you most of the fun of a map-reduce framework, but without having to administer a map-reduce server; it's even smart enough to handle internal state by creating intermediate topics and/or creating local RocksDB instances. It's pretty cool.

It also has the advantage of letting me set retention, so I don't have to worry about doing any kind of manual cleanup; I only have to worry about having N days of stuff on the server, and it's trivial to change that.
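Retention is just a per-topic config. For example, with the stock Kafka CLI (the topic name `trades` here is hypothetical), keeping about seven days of data looks like:

```shell
# Keep ~7 days (604,800,000 ms) of data on the "trades" topic;
# the broker deletes older log segments automatically.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name trades \
  --add-config retention.ms=604800000
```

Changing N days later is a one-line `--alter`, with no code changes.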

Also, I have a number of topics, each with 32 partitions, meaning that if I do find any kind of strategy that works, I can very easily scale it up without changing any code.
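The reason partitioning gives free scale-out: Kafka's default partitioner hashes the record key (it uses murmur2) modulo the partition count, so every record for a given ticker lands on the same partition and can be consumed in order by one consumer. A rough Python sketch of that mapping (md5 stands in for murmur2 here, purely for a stable demo):

```python
import hashlib

NUM_PARTITIONS = 32

def partition_for(ticker: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key (ticker) to a partition, mimicking Kafka's
    default hash-the-key partitioner (Kafka uses murmur2, not md5)."""
    digest = hashlib.md5(ticker.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same ticker -> same partition, every time, so adding consumers
# (up to 32 per topic here) scales processing with no code changes.
print(partition_for("AAPL"), partition_for("MSFT"))
```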

I'm basically using Kafka as a streaming-only database. I think it's neat but I haven't actually found any strategies that make or lose money. It's just been a fun way to play around with different bits of server stuff.

[1] https://docs.alpaca.markets/docs/real-time-stock-pricing-dat...


Awesome! Yeah, seems like a great learning opportunity. I did two Coursera courses on trading during Covid as well, but didn't touch it after creating one strategy, mostly using RSI, moving averages, etc. lol. Will give this a try, thanks.


Yeah, I had no delusions I would become a billionaire from anything I did. I just find that I learn stuff better if I have a direct goal instead of just dealing in abstracts.

I can read about all the fun theory behind trading math and for distributed systems, and that has some value, but I will understand stuff a lot better if I give myself a real project to do stuff with. I guess I am more of an engineer than a mathematician.

