gilgoomesh's comments | Hacker News

That entire banner is pure copyright infringement.


Swift also has two-way interop with C++. Its coverage of the C++ language is incomplete, but I suspect it might outpace Carbon.
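For anyone curious what that looks like in practice, here's a minimal sketch. It assumes Swift 5.9+ built with C++ interoperability enabled (-cxx-interoperability-mode=default) and a hypothetical Clang module named ForestLib exposing the C++ header shown in the comments; the module name and countTrees function are made up for illustration.

    // C++ side (forest.h, exposed through the hypothetical ForestLib module):
    //
    //     int countTrees(int acres);

    import ForestLib

    // The C++ free function is imported directly into Swift;
    // its int parameter and return value come across as Int32.
    let trees = countTrees(12)
    print("Estimated trees: \(trees)")

The reverse direction is what makes it two-way: public Swift declarations can be consumed from C++ through a compiler-generated header.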


I really struggle to imagine an organisation that shepherds a large and venerable C++ codebase migrating over to Swift.

For all of C++'s faults, it is an extremely stable and vendor-independent language. The kind of organisation that's running on some C++ monolith from 1995 is not going to voluntarily let Apple become a massive business risk in return for marginally nicer DX.

(Yes, Swift is OSS now, but Apple pays the bills and sets the direction, and no one is seriously going to maintain a fork.)


I've seen organizations do that in the past with Java and .NET, moving away from C++.

It is perfectly feasible for companies that are fully committed to the Apple ecosystem.


Maybe not, but an Objective-C++ codebase might be a good candidate.


Do you have projects with huge code bases of Obj-C++ in mind?

I guess, some Mac apps? In that case I think most platform independent "guts" would be in C or C++, and the Obj-C++ part is tied to the frameworks, so the devs would have to rewrite it anyway.


Facebook iOS


I think Apple is largely doing this for themselves.


I need to learn more about that. I know that the Ladybird folks want to use it inside their C++ project.


If you read through the thread on that top story, it wasn't a hallucination. There was really a leaf in the shot.


ChatGPT, do you know where the General left his keys?


It's weird that we use "postmortem" in those cases since the word literally means "after death"; kind of implying something bad happened. I get that most of these postmortems are done after major development ceases, so it kind of is "dead" but still.

Surely a "retrospective" would be a better word for a look back. It even means "look back.


It gets even better. Some companies use "mid-mortems", which are evaluation and reflection processes in the middle of a project...


sounds like an appropriate way to talk about death march projects, tbh


It's hilarious watching $50,000 worth of robots take so long to assemble a couple dollars worth of Lego. It's like peering into the old folks home for robots.


That should tell you why stuff is still hand assembled in Asia instead of by robots in the west.


As a counterexample, I offer a pick-and-place line in action.

https://youtu.be/Ca-SoKzjh4M?t=110

SMT component placement isn't that different to placing bricks. Conventional wisdom is that if you can design a PCB that requires no manual work, its assembly cost is more-or-less location independent. SMT pick and place can hit speeds of 200,000 components per hour [1]. That's about 50 components per second.

[1] https://www.hallmarknameplate.com/smt-process/


Fixturing isn't automated in most places. Sure a gantry style CNC machine can drive screws vertically into your parts to join them, but it requires a human loader to put the two parts onto the fixture in the first place.


The tasks requiring high dexterity, like final assembly of the product with displays, keyboards, ribbon cables and cases, are still done by hand by humans.


Also why it’s OK to stop worrying about our future robotic (or AI) overlords.


Those are already an issue. AI is a bigger threat to cognitive tasks than to physical ones.

Skynet isn't gonna attack you with Terminators wielding a "phased plasma rifle in the 40W range", but it will be auto-rejecting your job application, your health insurance claims and your credit score, and brainwashing your relatives on social media.


Absolutely, that’s without any doubt.

There’s a difference though. The “cool” Terminator Skynet pursues its own goals, and wasn’t programmed by humans to kill. The “boring” insurance-rejecting Skynet is explicitly programmed to reject insurance claims by other humans, unfortunately.

So still, no need to worry about our AI overlords, worry about people running the AI systems.


> AI is a bigger threat to cognitive tasks than to physical ones.

I don't see how you could possibly think this is true. Physical automation is easier to scale since you only need to solve a single problem instance and then just keep applying it on a bigger scale.


Automation doesn't work where high dexterity and quick adaptability are required. It's much cheaper and quicker to train a human worker to move from sewing a Nike shoe to an Adidas shoe than it is to reprogram and retool a robot.

Robots work for highly predictable high speed tasks where dexterity is not an issue, like PCB pick and place.


This reply is so perfect I'm going to memorize it for family and friends.


Give it a decade and we'll probably have robo-builders doing it faster than we can blink…


I would certainly hope the laundry robots come first. Screw Lego robots and self driving cars. Please just take the laundry out of the dryer, fold it all and put it away.


First AI is creating our art and pretty soon it will be playing with our toys for us too


People claim that Lego is expensive, but try buying a robot that builds Lego...


You build the robot out of Lego.


They should have done it with Lego Mindstorms :-)


Someone has. I can’t find it, but there was a Lego robot that would build a (simple) Lego set.


From Wikipedia on Chinese Water Torture:

> Despite the name, it is not a Chinese invention and it is not traditional anywhere in Asia. Its earliest known version was first documented by Hippolytus de Marsiliis in Bologna (now in Italy) in the late 15th or early 16th century, and it was widely used in Western countries before being popularized by Harry Houdini in the early 20th century.


A pixel is two dimensional, by definition. It is a unit of area. Even in the signal processing "sampling" definition of a pixel, it still has an areal density and is therefore still two-dimensional.

The problem in this article is it incorrectly assumes a pixel to be a length and then makes nonsensical statements. The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".

In the same way that "square feet" means "feet^2" as "square" acts as a square operator on "feet", in "pixels wide" the word "wide" acts as a square root operator on the area and means "pixels^(-2)" (which doesn't otherwise have a name).


It is neither a unit of length nor a unit of area; it is just a count. A pixel - ignoring the CSS pixel - has no inherent length or area. To get from a number of pixels to a length or area, you need the pixel density. 1920 pixels divided by 300 pixels per inch gives you a length of 6.4 inches, and it is all dimensionally consistent. The same goes for 18 megapixels: with a density of 300 times 300 pixels per square inch you get an image area of 200 square inches. Here pixels per inch times pixels per inch becomes pixels per square inch, not square pixels per square inch.
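Writing that arithmetic out, treating the pixel as a dimensionless count (same numbers as above):

    \frac{1920\ \text{px}}{300\ \text{px}/\text{in}} = 6.4\ \text{in}
    \qquad
    \frac{18 \times 10^{6}\ \text{px}}{300 \times 300\ \text{px}/\text{in}^{2}} = 200\ \text{in}^{2}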


CSS got it right by making pixels a relative unit. Meters are absolute. You cannot express pixels in meters. Because they are relative units.

If you have a high resolution screen, a CSS pixel is typically 4 actual display pixels (2x2) instead of just 1. And if you change the zoom level, the number of display pixels might actually change in fractional ways. The unit only makes sense in relation to what's around it. If you render vector graphics or fonts, pixels are used as relative units. On a high resolution screen the rendering will actually use those extra display pixels.

If you want to show something that's exactly 5 cm on a laptop or phone screen, you need to know the dimensions of the screen and figure out how many pixels you need per cm to scale things correctly. CSS has some absolute units, but they typically only work as expected for print media.
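As a worked example with hypothetical numbers: say the panel is 30 cm wide, 2560 device pixels across, and reports a devicePixelRatio of 2. Then:

    \frac{2560\ \text{device px}}{30\ \text{cm}} \approx 85.3\ \text{device px/cm}
    \qquad
    \frac{85.3}{2} \approx 42.7\ \text{CSS px/cm}
    \qquad
    5\ \text{cm} \times 42.7\ \text{CSS px/cm} \approx 213\ \text{CSS px}

The catch is that the browser doesn't reliably expose the physical panel width, which is why the absolute CSS units only really hold up for print.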


> The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".

But to be contrarian, the digital camera world always markets how many megapixels a camera has. So in essence, there are situations where pixels are assumed to be an area, rather than a single row of X pixels wide.


The digital camera world also advertises the sensor size. So a 24MP APS-C camera has smaller pixels than a 24MP Full-frame camera, for example.
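Rough numbers to make that concrete (sensor widths are approximate, and 24MP is typically a 6000 x 4000 grid):

    \text{full frame: } \frac{36\ \text{mm}}{6000\ \text{px}} = 6.0\ \mu\text{m per pixel}
    \qquad
    \text{APS-C: } \frac{23.6\ \text{mm}}{6000\ \text{px}} \approx 3.9\ \mu\text{m per pixel}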


> in "pixels wide" the word "wide" acts as a square root operator on the area and means "pixels^(-2)"

Did you mean "pixels^(1/2)"? I'm not sure what kind of units pixels^(-2) would be.


pixel^(-2) is "per squared pixel". Analogously, 1 pascal = 1 newton / 1 metre^2. (Pressure is force per squared length.)


Same as if you were building a sidewalk and you wanted to figure out its dimensions, you’d base it off the size of the pavers. Because half pavers are a pain and there are no half pixels.


> A pixel is two dimensional, by definition.

A pixel is a point sample by definition.


An odd definition. A pixel is a physical object, a picture element in a display or sensor. The value of a pixel at a given time is a sample, but the sample isn't the pixel.


Definitions of technical things you aren’t familiar with tend to be odd.

You are referring to a physical piece of a display panel. A representation of an image in software is a different thing. Hardware and software transform the DSP signal of an image into voltages to drive the physical pixels. That process takes into account physical characteristics like dimensions.

Oh btw physical pixels aren’t even square and each RGB channel is a separate size and shape.


These don't appear to be intended as watermarks. They're merely a valid use of non-breaking space for tightly coupled elements like "2.5 billion" and "Title I".

Sure, a human author would almost never do that, but they could. I could imagine a Markdown syntax that did that – it could be done similarly to how `code` is marked up in most blogs.


Snow Leopard was macOS moving so slowly that people thought Apple was abandoning the Mac.

Apple changed how they tied OS updates to hardware sales in this era, and this left a lot of Macs on Snow Leopard for half a decade. So people remember that last point update – which was as close to a long-term-stability release as Apple has ever had.

But to get there, Snow Leopard received 15 updates over 2 years, and since it was really just point updates to Leopard, it was more like 29 updates over 4 years without a major user-facing feature. And this was after Leopard itself took over 2 years to develop.

If Apple did nothing but polish features and reduce bugs for 6 years, people would proclaim them dead. And they might actually be dead since their entire sales model is tied to cycles of development, promotion and delivery. For those of us who remember Apple getting stuck on System 7 between 1990 and 1997 and how the company nearly collapsed in that era: it would be a delay almost on that scale.


It didn’t have anything to do with Sarbanes-Oxley (that was iPhone/iPod touch updates), Apple just charged for OS updates back then.

Snow Leopard was notably cheaper than Leopard ($30 vs $130), Lion was $30 on the App Store, Mountain Lion was $20, then Mavericks and everything after have been free.

Snow Leopard did have a long life though, it was the last OS that could run PowerPC apps, also the last to run on the original 32-bit Core Duo Intel Macs.


Snow Leopard introduced GCD, which was a HUGE new feature. It completely changed how we wrote async code. It just wasn't a huge user facing feature.

Snow Leopard also introduced the Mac App Store (in a point release), which was a user facing feature.

I think the "zero new features" mostly meant "no flashy user facing features". It had a lot of new features for developers.

