
I happened to be walking a few blocks north and one east of this when it happened. I saw dozens of people running up the street in a panic. Office ladies running barefoot with their heels in hand. I asked what had happened and they said there was a giant explosion that could only have been terrorism.

The Wikipedia article has a few more photos: https://en.wikipedia.org/wiki/2007_New_York_City_steam_explo...

The aftermath photo gives you a good sense of it: https://www.nbcnews.com/id/wbna20184563


Science needs an intervention similar to what the CRM process (https://en.wikipedia.org/wiki/Crew_resource_management) did to tamp down cowboy pilots flying their planes into the sides of mountains because they wouldn't listen to their copilots who were too timid to speak up.

...on the evening of Dec 28, 1978, they experienced a landing gear abnormality. The captain decided to enter a holding pattern so they could troubleshoot the problem. The captain focused on the landing gear problem for an hour, ignoring repeated hints from the first officer and the flight engineer about their dwindling fuel supply, and only realized the situation when the engines began flaming out. The aircraft crash-landed in a suburb of Portland, Oregon, over six miles (10 km) short of the runway

It has been applied to other fields:

Elements of CRM have been applied in US healthcare since the late 1990s, specifically in infection prevention. For example, the "central line bundle" of best practices recommends using a checklist when inserting a central venous catheter. The observer checking off the checklist is usually lower-ranking than the person inserting the catheter. The observer is encouraged to communicate when elements of the bundle are not executed; for example if a breach in sterility has occurred

Maybe not this system exactly, but a new way of doing science needs to be found.

Journals, scientists, funding sources, universities and research institutions are locked in a game that encourages data hiding, publish or perish incentives, and non-reproducible results.


The current system relies on the market of ideas - i.e. if you publish rubbish a competitor lab will call you out. It's not the same as the two people in an aircraft cockpit - in the research world that plane crashing is all part of the market adjustment - weeding out bad pilots/academics.

However it doesn't work all the time for the same reasons that markets don't work all the time - the tendency for people to choose to create cosy cartels to avoid that harsh competition.

In academia this is created around grants, either directly (are you inside the circle?) or indirectly - "the idea obviously won't work, as the 'true' cause is X".

Not sure you can fully avoid this - but I'm sure there might be ways to improve it around the edges.


> The current system relies on the market of ideas - i.e. if you publish rubbish a competitor lab will call you out.

Does not happen in practice. Unless you're driven by spite, fanaticism towards rigorousness, or just hate their guts, there is zero incentive to call out someone's work. Note that very little of what is published is obvious nonsense. But a lot has issues like "these energy measurements are ten times lower than what I can get, how on earth did they get that?" Maybe they couldn't, or maybe you misunderstood and need to be more careful when replicating? Are you going to spend months verifying that some measurements in a five-year-old paper are implausible, or do you have better things to do?


Sure - such direct contradiction is rare - "call out" was the wrong phrase - that mostly only happens when people try and replicate extraordinary claims.

Much more common is that another paper is published which has a different conclusion in the particular area of science, and which may or may not reference the original paper - i.e. the wrong stuff gets buried over time by the weight of others' findings.

You could say that part of the problem is that correction is often incremental.

In the end the manipulation by Masliah et al. came out - science tends to be incremental rather than all big breakthroughs, and I'd say any system will struggle to deal with bad-faith actors.

In terms of bad-faith actors you have two approaches - looking at better ways to detect them, and looking at the properties of the system that perhaps create perverse incentives - but I always think it's a bad idea to focus too much on the bad actors - you risk creating more work for those who operate in good faith.


How is that correction mechanism supposed to work though? Do you mean the peer review process?

Friends in big labs tell me they often find issues with competitor lab papers, not necessarily nefarious but like “ah no, they missed thing x here so their conclusion is incorrect”... but the effect of that is just that they discard the paper in question.

In other words: the labs I’m aware of filter papers themselves on the “inbound” path in journal clubs, creating a vetted stream of papers they trust or find interesting for themselves... but that doesn’t provide any immediate signal to anyone else about the quality of the papers.


> How is that correction mechanism supposed to work though? Do you mean the peer review process?

No. I meant somebody else publishes the opposite.

One of the things you learn if you are a world expert in a tiny area (i.e. a PhD student) is that half the papers published in your area are wrong/misleading in some way (not necessarily knowingly - they might just not know about some niche problem with the experimental technique they used).

I agree peer review is far from perfect, and there is a problem in that a paper being wrong is still a paper in your publication stats, but in the end you'd hope the truth will out.

People got all excited about cold fusion - then cold reality set in - I don't think the initial excitement about it was a bad thing - sometimes it takes other people to help you understand how you've fooled yourself.


I expressed the same idea here not too long ago - the value of any one individual paper is exactly 0.0 - and was downvoted for it, but I believe this is almost the second thing that you learn after you publish, and what seems to confuse the "masses" the most.

You (as a mortal human being) are not going to be able to extract any knowledge whatsoever from an academic article. They are _only_ of value to (a) the authors, and (b) people/entities who have the means to reproduce/validate/disprove the results.

The system fails when people who can't really verify use the results presented. Which happens frequently... (e.g. the news)


I'm in academia, and I think it has many good points.

The number one issue in my mind is that competitor labs don't call you out. It's extremely unusual for people to say, publicly, "that research was bad". Only in the event of the most extreme misconduct do people get called out, rather than for just shoddy work.


Yeah I don't think CRM is the correct thing in this case... I just think that there needs to be some new set of incentives put in place such that the culture reinforces the outcomes you want.


There actually are checklists you have to fill out when publishing a paper. You have to certify that you provided all relevant statistics, have not doctored any of your images, have provided all relevant code and data presented in the paper, etc. For every paper I have ever published, every last item on these checklists was enforced rigorously by the journal. Despite this, I routinely see papers from "high-profile" researchers that obviously violate these checklists (e.g. no data released, not even a statement explaining why the data was withheld), so it seems that they are not universally enforced. (And this includes papers published in the same journals around the same time, so they definitely had to fill out the same checklist as I did.)


Not to mention that scientists spend a crazy amount of time writing grant proposals instead of doing science. Imagine if programmers spent 40% of their time writing documents asking for money to write code. Madness.


Project managers and consultants do actually write those documents/specifications justifying the work before the programmers get to do it.


Indeed. You do need some idea of what you are going to do before being funded.

The tricky bit is that in research, and this is a bit like the act of programming, you often discover important stuff in the process of doing - and the more innovative the area, the more likely this is to happen.

Big labs deal with this by having enough money to self-fund prospective work, or to support things for extra time - the real problem is that new researchers, who often have the new ideas, are the most constrained.


Kinda making my point :P

If your org does this, that's a problem.


No it's not a problem -- it's necessary.

If you work at a large company, it could consider thousands of different new major features or new products. But it only has the budget to pay for 50 per year.

So obviously there's a whole process of presentations, approvals, refinement, prototypes, and whatnot to ensure that only the best ideas actually make it to the stage where a programmer is working on it.

Same thing with a startup, but it's the founders spending months and months trying to convince VC's to invest more, using data and presentations and whatnot.

It's not a problem -- it's the foundation of any organization that spends money and wants to try new things.


How else would it work? The onus needs to be on someone to make sure we are doing worthwhile things. Like anything else in life, you need to prove you deserve the money before you get it. Often that means you need to refine your ideas and pitches to match what the world thinks it needs. Then once you get a track record it lowers your risk profile and money comes more easily.


Sounds sensible, but the major unasked question it avoids is whether the current funding and organization structure of science was in place when past scientific achievements were achieved.

The impression I get from anecdotes and remarks is that pre-1990s, university departments used to be the major scientific social institution, providing the organization where the science was done, with a feedback cycle measured in careers. Faculty members would socialize and collaborate or compete with other members. Most of the scientific norms were social, which was possible because the stakes were low (measured in citations, influence and prestige only).

It is quite unlike the current system centered on research groups formed around PIs: a machine optimized for gathering temporary funding for non-tenured staff so that they can produce publications and 'network', using all that to gather more funding before the previous grant runs out. No wonder social norms like "don't falsify evidence; publish when you have true and correct results; write and publish your true opinions; don't participate in citation laundering circles" can't last. The possibility of failure is much more frequent (every grant cycle), and the environment is highly competitive in a way where you get only a few shots at a scientific career or you are out.


Imagine if everybody in every software company was an "engineer," including the executives, salespeople, and market researchers. Imagine if they only ever hired people trained as software engineers, and only hired them into software development roles, and staffed every other position in the company from engineering hires who had skill and interest at performing other roles. That's how medical practices, law firms, and some other professions work.

For example -- my wife is an architect, so I'm aware of specific examples here -- there are many architecture firms that have partners whose role consists of bringing in big clients and managing relationships with them. They are never called "sales executives" or "client relationship management specialists." If you meet one at a party, they'll tell you they're an architect.

Apparently it's the same thing with scientific research. When a lab gets big enough, people start to specialize, but they don't get different titles. If you work at an arts nonprofit writing grant applications, they will call you a grant writer, but a scientist is always a scientist or a "researcher" even if all they do is write grant applications.


And Boeing was like that. Before they merged with McDonnell Douglas. Before the MAX disaster. Before the failed Starliner.


> Imagine if programmers spent 40% of their time writing documents asking for money to write code.

The daily I'm not taking part in anymore at work started today at 9:30 as always, and currently (11:50) has people excusing themselves because they have other meetings...

We need a revolution in exposing bad managers and making sure they lose their jobs. For every kind of manager. But that situation isn't very far from normal.


If this was applied in science we'd still be flying blind with regard to stomach ulcers, because a lot of 'researchers' thought bacteria couldn't live in the stomach (which is obviously a BS reason).

Yes, CRM procedures are very good in some cases and I would definitely apply them in healthcare in stuff like procedures, or the issues mentioned, etc.


Have many other projects put MCP servers (https://modelcontextprotocol.io/introduction) to use since it was announced? I haven't seen very many others using it yet.


Cursor also just got support this week. Overall it’s still early (MCP only came out a couple of months ago) but seeing multiple clients that allow it to be used with non-Anthropic models, and getting good results, makes me bullish for MCP.

My colleague has been working on an MCP server that runs Python code in a sandbox (through https://forevervm.com, shameless plug). I’ve been using Goose a lot since it was announced last week for testing, and it’s rough in some spots but the results have been good.


My prediction is that any future data that is published will likely show up in the x.com app. I believe they are trying to privatize all government functions and Musk wants X to be the official government super-app similar to China's system.



Just astonishingly corrupt. Good lord.


Right... I think it's going to be way more than just data as well. You can expect all gov functions will start to show up in his app. Driver license applications, tax filing, etc...


So this is how Elon gets his everything app. Jesus Christ, the corruption knows no end. And he might actually pull it off with this administration.


> This led to slower device performance without informing users, which is a removal of expected performance functionality.

As opposed to the device unexpectedly shutting down due to a degraded battery not being able to push enough energy to support the CPU? They didn't remove expected performance, they prevented crashes which are by definition 0 performance. All Li-ion batteries degrade over time. That's not removing a feature...

This whole thing was totally overblown.


Well, they DID remove expected performance by slowing CPU performance, didn't they? People who had bought these iPhones (and not the previous ones) did so also because of the promise of a more powerful CPU, a promise broken by Apple. It is removing a feature (a better CPU), and Apple knew it; that's why they did it without informing users.


Just to add, they also got fined by the EU for doing so, so it was ruled to be illegal. Bambu's changes would fall into the same category of altering the product and degrading the experience after it's been sold.


Just to let you know that InstaCam360 did the same on their cameras with the smartphone app.

Previously you could directly upload the 360 videos to YouTube; now you need to download the film locally on the phone, then host a converted version, and only after those hoops are you permitted to upload.

Or you can now buy a monthly subscription and get back the feature that was already there before. Quite disappointed with this kind of behavior.


The problem isn't that they've done it.

The problem is that the user got no choice. Some might prefer degraded performance, others might prefer to charge their devices more often.

Also, the seller should have no business touching anything that they've already sold - they might offer support, but it should be up to the user to accept it or not.


It's not a matter of "charging more often". The phone just shut down when the battery was somewhere between 0% and 40%.

Source: had two 6S's in the family. In the cold it could just suddenly shut down mid-call from 60% battery.


Indeed; while I've not had this specific issue with the phones, I do still have a mid-2013 MacBook Air lying around (it's now too old to realistically sell), and the battery on that was so worn by the time I got an M-something to replace it that it would go from "fine" to "emergency shutdown" during boot if I forgot to plug it in. And then report something like 20% if I plugged it in and immediately booted it again.


Then the battery percentage is miscalibrated. The solution to that is to recalibrate the battery level, so that the old 40% is the new 0%.


It's not like the battery is actually empty. The phone is still able to run at 40% if it limits CPU power draw. As long as the throttling curve is accurate to the battery quality, it's all upside. A slow device is better than a turned off device. And if you want to keep your phone above 40% charge so it runs faster, go for it.

The root problem was not the throttling, it was the phone's inability to run at expected speed after a couple years.


The root problem is that Apple won't let you replace your battery.


However they applied it to all phones of that model, not just ones with degraded batteries


No, it was dynamic based on voltage. iPhones with worn batteries had higher performance at full battery and swapping the battery with a fresh replacement restored full performance even at low battery percentage. In fact this is how the slowdown was discovered: someone replaced their iPhone battery with a non-genuine replacement and it got noticeably faster.


You are still missing the point.

The USER should choose that, not Apple.

Not all of them shut down; someone might get a battery replacement.

What Apple should've done is introduce a toggle and give a warning in a notification, and in case of a crash, display it again.


Apple (IMO rationally) decided that people would prefer a working phone, one they can use to call emergency services, for example, to a phone that just suddenly dies.

After the massive hissy fit the Internet threw (along with lawsuits), they added a switch. Now you can choose to have your phone suddenly die.

But the legend lives on that "Apple slowed down phones permanently!!" - even though the fix for that is a 40€ battery swap that takes 30 minutes in any mall phone repair shop.


Again, let the user choose. Apple sold a product; it's out of their hands to decide what users do with it.

Maybe I want to use the device in a way that's 100% connected to the charger and repurpose it.

It's not Apple's business what I'm doing with it.


If you left it hooked up to a charger, their fix would never have affected you. It only slowed down the CPU when the risk of catastrophic shutdown was imminent.

I like a toggle for features like this, but it was a pretty standard user experience / reliability choice imho.


What if I want to do that AFTER the fix was applied?

What if you replace the battery AFTER the fix was applied? You can't roll back.

Again, it's about the user's choice. It's not Apple's device; it belongs to whoever bought it. They shouldn't even be allowed to DECIDE which option is better. The user should be able to pick whichever option they want to go with.


With a new battery, the throttling goes away. The CPU throttling only kicks in if your battery condition is poor, and then only at lower charge levels where the risk of unplanned power loss is imminent.

I get it, but if you’re going to accept binary blob updates from a manufacturer at all, this one wasn’t bad.

If there was a toggle, would you really run your phone in “reckless disregard for battery condition” mode?

Because that is what this fixed, a flaw in the firmware where the power management subsystem made incorrect assumptions about the battery condition. All new phones come with this baked in and working properly, so your phone doesn’t randomly die in the middle of calls when your battery gets old.

People pitchforked over this update without understanding what it was designed to do. If your phone has a good battery, it does not throttle the cpu. It just adjusts the power management profiles to reflect battery aging.


Yes this would have been better.

But the way they did it was far from malicious. It only affected users who were actually in danger of an emergency shutdown, during times when the shutdown was imminent. While I don’t want anybody diddling my firmware without giving me a choice, this particular issue was really a nothing burger in the end.

It was discovered when it became apparent that replacing a defective battery made the phone faster. Seems like a standard reliability / user experience fix to me. Not many people would choose the “don’t adjust system power consumption to prevent unplanned shutdowns when the battery is about to fail” toggle.


It was not overblown. Apple didn't disclose what they were doing or give the user the option to decide what was best for them. When a company chooses to behave that way, it should hurt them, and it did.

Apple's actions in this case were even worse than Bambu's. At least Bambu documented what the update did and offered the option of declining it.


> This whole thing was totally overblown.

No, it wasn't. If the battery was broken and they knew the battery was broken, they should have informed the user that the phone could be fixed with a new battery. They decided to gimp the device and not tell the user, so users would be more likely to purchase a new device rather than simply fixing the old one.


> All Li-ion batteries degrade over time

So they know this yet they refuse to let users swap the battery?


Users can swap the battery?

  1) open phone
  2) remove battery
  3) replace battery
  4) close phone
It just requires more tools than your fingers, like every single mainstream phone.


Not sure what kind of users you're dealing with, but your typical iPhone user can absolutely not do that.


A typical car driver can't change the oil in their car, nor can they do a head gasket swap.

People don't go around saying that Ford "refuses to let users change their oil".

It's all perfectly doable, but you do need the tools and an ability to follow a step by step guide with pictures.


Imagine Ford deciding their cars must drive at 50% of their speed when the engine oil is older than 2 years, and at the same time forbidding users from changing the oil.

Yet there are always people justifying these types of awful practices as better for users. They aren't; the measures are only good for business.


Ford actually does this. They have something called limp mode for when sensors detect degraded conditions. They won't honor the warranty if you clear the code manually and continue operating the vehicle.

Many cars enter limp mode for when the ECU senses a possibly damaging condition. This limits the performance and capabilities until someone with a diagnostic computer can plug it in. Many times these diagnostic computers are entirely proprietary.

I'm not saying it is justified, but to pretend that other businesses don't do this is silly.


Well, that still wouldn't reduce your car speed by 50%.

And even in that case there would be a warning on the console, and a mechanic would be able to tell you what is happening. In this iPhone case, there was no warning at all on the device, nor was there any disclosure that they would be doing this to the phones.

You know this. In any case, thank you for the ECU info.


> Well, that still wouldn't reduce your car speed by 50%

It reduces your speed by much more than that. It varies depending on the model, but limp mode often won't let you get past 2nd gear.


> Well, that still wouldn't reduce your car speed by 50%.

It does actually. It limits your top speed, and your engine's rev range to approximately half of redline or less. Typically you end up limited to under 45. Also, accessories and other options, like A/C, are disabled. The only indication that you will get is the reduced performance and the check engine/service light (sort of how you might get a 'service battery' warning and reduced performance on a phone).

Again, not defending it, but pointing out that Apple hardly invented artificially limiting performance behind opaque warnings to prevent unwanted outcomes. Cars have had limp mode since before the iPhone was invented.


Forbidding them from changing the oil? I personally changed my battery, I did not feel like it was forbidden.

Not even that hard.

For me, the firmware fix helped me limp through the 2 months before I finally got around to replacing the battery.

It made my phone that was flaky and unreliable below 40 percent battery into a phone that worked slightly slower once the battery got low, but didn’t just randomly shut off during calls anymore.

I’d have preferred a toggle, but to be honest I doubt I’d have ever used “reckless disregard for remaining battery capacity” mode.


Have you ever driven a German car?

They are SO LOUD if you don't service them at regular intervals. They're even doing fancy tricks to make sure you're not faking the service.


Yes. I live in Germany, drive German cars and know the tech.

Regular service is indeed a bother. You know what I hate the most? In my oldish Mercedes it isn't even possible to change/update the time without using a proprietary tool only available to official Mercedes mechanics. Since I refuse to pay a premium to go to their mechanics, the clock on my car always shows the wrong time.

And let's not even get into new business models like charging you a subscription to unlock the car to go faster or to unlock the heated seats. Indeed they also have quite "creative" ways to squeeze money out of you and force you to get new models.


I suspect this is related to why a string quartet is the right number of musical voices. Two violins, viola, and cello give you a very fulfilling number of separate ideas to track without overwhelming you.


I think you're taking the metaphor about a string quartet as a "conversation among equals" too literally.

In terms of perception, I'm not sure there's much of a relationship to a human conversation. To make things equal, the string players would need to take turns soloing while the others wait more or less silently to respond, each with their own solo response. You'd be bored out of your gourd if string quartets were written that way.

But more to the point, the vast majority of time in a string quartet is devoted to two or more of the players producing phrases of music in parallel, and that is musically coherent and pleasing to the players and audience. Most humans cannot track two humans speaking in parallel at all. That alone tells us that music cognition is a very different phenomenon than speech cognition.

In short, I'm not sure why a string quartet would be considered the optimal genre for humans to produce music together. And even if it is, the reasons why are even less likely to do with the protocols around human speech cognition, and certainly not with some bizarre equivalent of the "theory of mind" associated with the musical phrase produced by one of the instruments[1].

1: Small digression-- In Elliott Carter's 2nd String Quartet he actually started with a concept that each instrument was a kind of "character" in a play among the quartet. In this case, the problem with OP's metaphor becomes obvious even in the introduction-- the homogenous timbre of a string quartet makes it difficult to hear the differences among the characters. (IIRC I think even Carter admitted this.)


I recall that Charles Rosen wrote somewhere that one of the reasons the string quartet took off in the classical period was that it allowed the playing of all the notes in a dominant seventh chord without double stops. Although this was probably a better explanation for the relative paucity of string trios in the output of Mozart (1) and Beethoven (0). The establishment of four parts as the "standard" scoring for vocal ensembles can be traced back to the 15th century.

On the other hand the second and more famous dining (and conversation) club founded by Dr Johnson had originally 9 members, and gradually grew from that to dozens. Although many including Johnson may have not been entirely happy with the expansion.


Counterpoint may leave too much implied with only two voices; with four or more voices one must increasingly break or relax various rules that promote voice independence, e.g. the use of parallel motion where additional voices simply double some other line (they can't all be independent, there's too bleeding many of them!), or to drop voices for a thinner texture, for example where there are five instruments but only three or four of them are sounding together most of the time. That's a long way to say that around three to four voices is ideal if you want independent lines (except they're not really independent, like two people shouting past one another; there's a weird mix of both working together while each yet manages to stand out in good counterpoint) though even better than this claim would be to compare, say, Bach's two-part inventions to works that have more voices.

For those who do not know counterpoint, you have only three motions a voice (a horizontal line of music, traditionally sung) can make relative to another voice (move closer, apart, or to hold steady) combined with limited voice ranges (say, a doubling of frequency, or so) and limited interval choices (seven, or so) within an octave or frequency doubling, and the voices are very close to one another but only rarely cross one another, on top of all that various rules systems that forbid or frown on such things as the tritone, parallel fifths, and so on into the weeds such that with more than a few voices you quickly run out of valid options for all the voices to move independently.


That's also a "classical" rock band - vocals, guitar, bass, drums (e.g. Beatles any many others)


Also the traditional barbershop quartet for a cappella.

Interestingly, I like the 5-piece versions of all 3 of these: add a keyboardist to the rock band, a piano or harpsichord to the classical string or woodwind quartet, a female vocalist to the acapella group. Having two leads lets you do much more intricate countermelodies and harmonies.


A string quartet consists of 4 tonally adjacent instruments, and is thus much more like 4 humans talking.

A "classical" rock band consists of 4 utterly different instruments from a tonal perspective, and is thus nothing like 4 humans talking. Same thing for jazz - and its why you can have multiple instruments performing simultaneously and in ways that are not obviously connected to each other.


Most rock bands have more than one guitar.


Bass is a guitar in 99% of rock bands.

I have not seen a good band with more than 4 people on stage simultaneously.


"Music for 18 Musicians" by Steve Reich is probably one of the masterworks of the second half of the 20th century.

Any vaguely disco-adjacent band will have more than 4 people on stage because there will be at least keyboards and horns in addition to drums, bass, guitar and vocals. Even a band that simply adds an additional person playing percussion to a typical 4-piece exceeds your limit yet can wonderfully enhance the music.

If you haven't seen any of those bands, then that's your loss, but provides no reason to try to generalize about the right size for a live band.


It's anecdotal and subjective but I go see many rock bands and good ones always have 4, 3 or 2 people :shrug:


Rolling Stones, Lynyrd Skynyrd, Aerosmith, Journey, Radiohead, Guns and Roses, Fleetwood Mac, AC/DC, The Eagles


Have not seen them either.

You pick really popular names, but in most regular bands, when there are more people there is a weaker link between them...


>> I have not seen a good band with more than 4 people on stage simultaneously

Glenn Branca Orchestra.


What does that have to do with the point?

> I have not met a religious programmer

> Donald Knuth


Yes, but the parent poster was almost certainly talking about lead guitar and rhythm guitar.


Vocals are often a person who is also playing an instrument. So in a 4-person band you can have up to four voices, lead and rhythm guitar (or maybe keyboard), drums, and bass.


AC/DC. Foo Fighters. Guns 'n Roses. Queens of the Stone Age...


well I have not seen them in particular ^^

There are good bands like that but not most.


Let's not forget the synthesists


You've never seen a big band or an orchestra? Sir please.


I saw an orchestra once or twice. We're talking about rock music here.

I've seen bigger bands but they suck in comparison.


Guessing from your response, you might not know of https://en.wikipedia.org/wiki/Big_band, which is fine. But now you do :)


Edit: I thought you linked to yet another famous band. People keep doing that... 99% of bands a normal person would see in normal life don't even have a Wikipedia page.

However, your link looks to be about jazz.

A common rock band is rarely good with more than 4 members because people lose unity, it's just technically harder, and people are rarely professionally trained.


Honestly I wouldn't know anything about rock bands because I don't listen to it at all, so you could be very right. I just responded because this thread was talking about string/barbershop quartets, which are almost definitely 4 parts because that is the minimum number of people required to make a 7th chord, and then again because you didn't know what a Big Band was, which I suppose is very understandable, but as somebody that grew up around and playing in them it's super foreign to me.


Ah. I was replying to "Most rock bands have more than one guitar". I don't listen to barbershop quartets; I'm not even sure what they are.


Which could be in the vocalist's hands in the GP's example, keeping the number at four.


Historically, those who controlled the media controlled the message. If you're the only one with a printing press, you control what people read. Same with radio. Same with TV.

But what happens when everyone can put their message in front of a lot of people? When the playing field is level? When everyone has a printing press, the ones with the best ideas are the ones people listen to. Influence can no longer be owned. It must be earned.

Man, Zuckerberg would be rolling over in his grave if he saw what happened in the following years and our current engagement/algorithmically-driven media ecosystem.

I find this book to be a bit sad. I do believe they were trying to do all of this stuff but it definitely went off the rails.


I think Zuckerberg is still alive and well aware of the current landscape.


I think he was replaced by a robot at some point and is now in Cuba with JFK and Tupac


Whether or not Zuckerberg himself is alive (he is), this version of him is long dead. One of the sad realities of life under capitalism is that even if you go into something idealistic, hopeful, kind even, once it hits a certain scale the only thing you can do is succumb to some kind of greed.

Even if you yourself try to resist, you'll be beaten down by market forces until eventually you either give it all up and let someone else do the evil, or discard the idealism and say "That's just the way it is!" and look back on your idealism with shame and grief.

Makes me sad too


I wouldn't be so hasty in giving out such judgements, especially when Zuck is the only one among all big tech magnates to push for Open Source AI and LLMs.


> Whether or not Zuckerberg himself is alive (he is), this version of him is long dead.

Yes, that was the joke that apparently wasn't obvious enough. :-)


Plain text accounting is cool but I think one of the biggest barriers for people is downloading bank data into a standard format.

The banks are never going to embrace much more than CSV or Excel files... the various data aggregation platforms (Yodlee, Plaid, etc.) are not open source or hobbyist friendly.

Back in ancient times there was a company called Wesabe (https://en.wikipedia.org/wiki/Wesabe) that wrote software that did bank syncing on your desktop. Mint.com basically put them out of business but I still think about that approach. I think it could work for open source.

Has anyone else thought about this?


hledger has tooling to transform fairly arbitrary CSV into transactions it understands[0]. I haven't tried it yet, but after spending 4 hours over multiple days helping my SaaS bookkeeping company troubleshoot their bank connection problems[1], you'd better believe I'm willing to put a little time into trying this out.

Every damn time I reconcile transactions I end up fighting their system that I can't see the workings of or fix myself. It's getting to the point where the juice isn't worth the squeeze. I'd sooner deal with the CSV myself given a half-way decent set of tools to do so.

[0] https://hledger.org/1.40/hledger.html#csv

[1] No, they weren't willing to refund me any money; this is "typical", in their words.
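
To give a flavour of what those CSV rules look like, here is a minimal sketch - the account names, column order and match pattern are invented for illustration, so check the linked docs for your own bank's format:

  # checking.csv.rules -- illustrative only
  skip 1
  fields date, description, amount
  date-format %Y-%m-%d
  account1 assets:bank:checking
  account2 expenses:unknown
  if amazon
    account2 expenses:shopping

With a rules file like that sitting next to the CSV, something like "hledger import checking.csv" should pick it up and append just the new transactions to your journal.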


This is the big advantage of hledger. It has two ways of translating CSV into journal form - one simple and one more complicated, but very flexible.

I find it best to have a separate journal for each downloaded account. I just include them into a master journal (along with a manual entry journal) and generate reports from that.

I also use git so I can roll back the latest import, if something goes wrong - but that hasn’t happened yet.
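
For anyone wondering what that layout looks like, the master journal is just a handful of include directives - the file names below are made up for illustration:

  ; main.journal -- one journal per downloaded account, plus manual entries
  include checking.journal
  include savings.journal
  include manual.journal

Reports then run against the master file, e.g. hledger -f main.journal balance.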


I recently discovered Paisa, which is basically a nice UI over ledger-cli. Import is very convenient. You upload a CSV (or similar), see the preview, then write a script which converts each row into ledger text format. There is linting and everything. When you like the result, just save the script so you can import again anytime. It also supports downloading commodity prices if you use it to track stocks and similar.

Charts are not generic enough for my taste so I'm exporting data elsewhere, but for data entry it is great.


Yes, charts are not generic enough for me either.

I like Paisa's ledger file editor, though other parts don't work in my country; I'm fetching prices with a Python script.

I wish its editor were separated out into a library that could be included in personal projects.


Yeah, I looked at that too... and that aspect is useful.

However, I'm not really looking for a layer on top of ledger as much as I'm looking for a configurable web-scraping system (using the local password manager) that can be run to get the CSV/PDF/etc. files needed to create the ledger.


I have run into issues where CSVs are not correct/aligned with the PDF statement data that banks are legally obligated to provide. In addition, CSVs almost never have balance data. So I download the PDFs and extract data from them. This is much more painful than it needs to be - providing a sensible machine-readable PDF just involves following a few simple rules to ensure the 3 or 4 transaction fields, and the few needed statement fields (dates and balances), are extractable without fragile heuristics. There is no conflict between branded and human-readable vs machine-readable.
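
For a sense of the kind of extraction this involves, here's a rough sketch in Python - assuming pdfplumber and a made-up single-line transaction format; a real statement needs a per-bank pattern and usually multi-line handling:

  import re
  import pdfplumber  # assumed third-party dependency

  # Hypothetical line format: "2024-01-31  COFFEE SHOP  -4.50"
  LINE = re.compile(r"^(\d{4}-\d{2}-\d{2})\s+(.+?)\s+(-?\d+\.\d{2})$")

  def extract_transactions(path):
      rows = []
      with pdfplumber.open(path) as pdf:
          for page in pdf.pages:
              text = page.extract_text() or ""
              for line in text.splitlines():
                  m = LINE.match(line.strip())
                  if m:
                      rows.append(m.groups())  # (date, description, amount)
      return rows

  print(extract_transactions("statement.pdf"))

Whether this works at all depends entirely on how the bank lays out its PDFs, which is exactly the fragility complained about above.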


The UK has Open Banking, a standardised API for accessing banking data:

https://www.openbanking.org.uk/


In my country any transactions are reported via SMS. So I implemented a system using Tasker to catch these SMS messages and store them in a CSV file. This CSV file is put inside a folder which is synced using SyncThing to my desktop.

I had a plan to process this data and add it to an accounting system, but didn't get a chance, and then my mobile crashed and I lost the Tasker action. Now I can't find the motivation to implement it again.


Following the St. Petersburg attack, the Federal Security Service (FSB), in an event that may ring somewhat familiar to many in the United States and Europe, asked Telegram for encryption keys to decode the dead attacker’s messages. Telegram said it couldn’t give the keys over because it didn’t have them. In response, Russia’s internet and media regulator said the company wasn’t complying with legal requirements. The court-ordered ban on accessing Telegram from within Russia followed shortly thereafter. Telegram did, though, enact a privacy policy in August 2018 where it could hand over terror suspects’ user information (though not encryption keys to their messages) if given a court order.

...

... Pavel Durov, Telegram’s founder, called on Russian authorities on June 4 to lift the ban. He cited ongoing Telegram efforts to significantly improve the removal of extremist propaganda from the platform in ways that don’t violate privacy, such as setting a precedent of handing encryption keys to the FSB.

https://www.atlanticcouncil.org/blogs/new-atlanticist/whats-...


This doesn't make any sense. Either the author of the article is confused, lying, or is drawing conclusions from source material that is untrue.

In the US case, there was a phone where data was encrypted at rest. Though Apple was capable of creating and signing a firmware update that would have made it easier for the FBI to brute force the password, Apple refused to do so.

In the Russian case, the FSB must have already had access to the suspect's phone because if it did not then Telegram would not be in any position to help at all.

So, the FSB must have already had access. And therefore, by having access to the phone they also had complete access to the suspect's chats in plaintext, regardless of whether or not the suspect used Telegram's private chat. There would have been no keys to ask Telegram for copies of.

Alternatively, the FSB might have had access to some other user's chats with the suspect, and wanted Telegram to turn over the suspect's full data. Telegram is 100% able to do that if they want to.

As the specific part of the article you have quoted is definitely bullshit, I suspect the rest of it is bullshit too and that despite what Roskomnadzor states in public, the real fight with Durov was over censorship.


This one coins many new phrases that will stick in my mind. Phrases like "Big teddy bears" are actually useful in that they roll up a subtle or complex idea into a pattern that we can then talk about. A similar thing happened for me with things like "gacha mechanics" in mobile games.

