hiyer's comments | Hacker News

Very poor. It hardly lasts a week with step tracking and heart rate enabled. Like someone else mentioned, go for a Garmin Instinct 2 or 3 Solar. My Instinct 2 Solar lasts more than 30 days on a full charge.

Not unlikely, given the crash in Atlassian's share price over the last year or so. When I was at NetApp a long time back, such rumours would often do the rounds when the share price took a dip, with Oracle and IBM usually touted as the likely buyers. NetApp is still going strong more than a decade later :-).

Amazing. Easily the most learning I've had in 18 minutes (I watched at 1.2x speed) in my life.


I was thinking of using it with DuckDB as well, but it seems it would be of limited benefit. Parquet objects are in the MBs, so they would be streamed directly from S3. With raw parquet objects it might help with the S3 listing if you have a lot of them (shaving a couple of seconds off the query). If you're already on DuckLake, DuckDB will use that to get the list of relevant objects anyway.
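
For illustration, a minimal sketch of that access pattern through DuckDB's JDBC driver - the bucket, prefix and region below are made-up placeholders, not anything from this thread:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ParquetOnS3 {
        public static void main(String[] args) throws Exception {
            // In-memory DuckDB via the JDBC driver (org.duckdb:duckdb_jdbc)
            try (Connection conn = DriverManager.getConnection("jdbc:duckdb:");
                 Statement st = conn.createStatement()) {
                st.execute("INSTALL httpfs");
                st.execute("LOAD httpfs");
                st.execute("SET s3_region='us-east-1'"); // placeholder region

                // The glob forces an S3 LIST over the prefix before any data is
                // read; the parquet bytes themselves are then streamed straight
                // from S3 with range requests. A listing/metadata layer only
                // helps with that first step.
                try (ResultSet rs = st.executeQuery(
                        "SELECT count(*) FROM read_parquet('s3://my-bucket/events/*.parquet')")) {
                    rs.next();
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }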


Maybe the OP is thinking of reading/writing to DuckDB native format files. Those require filesystem semantics for writing. Unfortunately, even NFS or SMB are not sufficiently FS-like for DuckDB.

Parquet data is static and append-only (existing files are never modified; new data means new files), so DuckDB has no problem with it living on S3.
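
To make the contrast concrete, a rough sketch via the JDBC driver - paths and bucket names are placeholders: the native .duckdb file is locked and updated in place (hence the filesystem-semantics requirement), while the parquet side only ever reads immutable objects:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class NativeVsParquet {
        public static void main(String[] args) throws Exception {
            // Native format: the .duckdb file is locked and rewritten in place,
            // so keep it on a local filesystem rather than NFS/SMB or S3.
            try (Connection conn = DriverManager.getConnection("jdbc:duckdb:/local/disk/warehouse.duckdb");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS events(id BIGINT, ts TIMESTAMP)");
                st.execute("INSERT INTO events VALUES (1, now())");

                // Parquet: write-once objects read with plain GETs, so s3:// is fine.
                st.execute("INSTALL httpfs");
                st.execute("LOAD httpfs");
                st.execute("CREATE OR REPLACE VIEW s3_events AS "
                         + "SELECT * FROM read_parquet('s3://my-bucket/events/*.parquet')");
            }
        }
    }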


What does DuckDB need that NFS/SMB do not provide?


I was thinking the same - if JSONata was a priority for them, why not choose a language with good support for it, like JS or Java? OTOH, if the development language was the priority, why not choose a format that is well supported in it?


JSONata is supported in AWS Step Functions; it's possible they want portability between on-prem and the cloud.


I ran into 5 and 7 in a Flink app recently - it was parsing a timestamp as a number first and then falling back to an ISO 8601 string, which is what it actually was. The flamegraph showed 10% of the time in the exception-handling path. While fixing that, I also found repeated creation of a DateTimeFormatter. Neither was in a loop, but both were being done for every event, at tens of thousands of events per second.
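
Roughly the shape of the fix - a minimal sketch, assuming the field arrives as a String holding either epoch millis or an ISO 8601 instant (the class and method names are mine, not from the actual app):

    import java.time.Instant;

    final class TimestampParser {
        // Decide up front whether the value is numeric instead of letting a
        // thrown NumberFormatException drive the fallback to ISO 8601.
        static Instant parse(String value) {
            if (isDigits(value)) {
                return Instant.ofEpochMilli(Long.parseLong(value));
            }
            // Instant.parse uses the shared DateTimeFormatter.ISO_INSTANT,
            // so no formatter is constructed per event.
            return Instant.parse(value);
        }

        private static boolean isDigits(String s) {
            if (s.isEmpty()) return false;
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c < '0' || c > '9') return false;
            }
            return true;
        }
    }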


(Perhaps a good library for timestamp code in data pipelines: https://github.com/williame/TimeMillis)


Thanks! I'm using Instant.parse at present and this is supposedly 37x faster. Will definitely give it a try.


And report back please! :)


> documentation survives when it lives next to the code.

15+ years ago, this was pretty much the standard. Every decision - whether major or just a hack to handle a corner-case - used to be recorded in the code itself. Then tools like Jira and Confluence came in and these things moved to undiscoverable nooks and corners of the organization. AI search tools like Glean and Rovo have improved the discoverability, though I'd still prefer things to remain in the code.


Agree!


Rovo Dev CLI is pretty good, though. That may just be because it talks to Claude or OpenAI on the backend.


I used it for a while a year or so ago, when it was in beta and gave 20M free tokens daily.


Rovo is backed by the typical LLM providers in general; Atlassian isn't training its own models.


They have enough enterprise customers to pay the bills for years though.


I loved their dry seasoning [1]. Bought some when I visited the Bay Area several years back and used to put it on everything from toast to pasta. Sadly, I haven't visited the US since to pick up more :-(

1. https://www.amazon.com/Pepper-Plant-Seasoning-11-oz/dp/B01LY...

