I agree that STEP files are not parametric. But for cases where that's all you've got, Altair Inspire is pretty good at letting you use them in a CAD system:
It's not a "test balloon" if you have a plan to mandate it and will be announcing that. Unless, I suppose, enough backlash would cause you to cancel the plan.
It's literally a test of how people will react, so yes, finding out whether people will react negatively is exactly the point of doing the test in the first place. Would you prefer that they keep their follow-up plans quiet so that the plans are harder to criticize? If you're against the plan, I'm pretty sure that's the exact type of feedback they're looking for, so it would make more sense to tell them directly if it actually affects you, rather than making a passive-aggressive comment they'll likely never read on an unrelated forum.
If they’re running the project with a Linus-type approach, they won’t consider backlash to be interesting or relevant, unless it is accompanied by specific statements of impact. Generic examples for any language to explain why:
> How dare you! I’m going to boycott git!!
Self-identified as irrelevant (objector will not be using git); no reply necessary, expect a permaban.
> I don’t want to install language X to build and run git.
Most users do not build git from source. Since no case is made why this is relevant beyond personal preference, it will likely be ignored.
> Adopting language X might inhibit community participation.
This argument has almost certainly already been considered. Without a specific reason beyond the possibility, such unsupported objections will not lead to new considerations, especially if raised by someone who is not a regular contributor.
> Language X isn’t fully-featured on platform Y.
Response will depend on whether the Git project decides to support platform Y or not, whether the missing features are likely to affect Git users, etc. Since no case is provided about platform Y's usage, it'll be up to the Git team to investigate (or not) before deciding.
> Language X will prevent Git from being deployed on platform Z, which affects W installations based on telemetry and recent package downloads, due to incompatibility Y.
This would be guaranteed to be evaluated, but the outcome could be anywhere from “X will be dropped” to “Y will be patched” to “Z will not be supported”.
I don't think Republicans characterize it that way. I consider myself right-leaning, and while I like most of Trump's agenda, I often disagree with his tactics. I increasingly have to fact-check stuff like this to see if he crossed a line. The problem is there's too much BS, summarizing, and mischaracterizing going on. Direct quotes (sometimes requiring context) are needed to get to the bottom of things. It's exhausting. I will say, right or wrong, the left brought this on; it is a response to their bad behavior.
>> The issue is you can't use the "I'll give you a deeper discount for a 3 year contract" line anymore.
I don't see why not. You can still offer a low price, right? You can still promise it won't go up for 3 years. But the customer can now cancel at any time.
Well, if you bought hardware to support that client for 3 years and the client changes their mind 2 months later then you are stuck with this hardware....
You can, but the thing that you get as a SaaS company in exchange for that 3 year discount is the certainty that you'd have a customer for 3 years. There is no longer an incentive to offer the discount.
But there is, if you want to remain competitive. If the reason people stay on your platform is that they're locked in for 3 years, then you're not offering as good a service as others. This opens the market to competitors who will offer better services or prices. The free market at work.
First off, we all know that unfortunately just offering the best product is not enough to guarantee that a customer will pick you and stay with you. Secondly, because of the use of ARR as a key metric that lock in has an extra benefit to the SaaS provider in dealing with investors.
Sure, but again the issue is that this is the incentive behind those discounts and when that incentive goes away justifying that discount is that much harder.
Cutting products that don't have 50 percent margins seems like a bad choice when their goal should be filling their advanced fabs. Keeping that investment at or near capacity should be their goal. They said they'd have to cancel one node if the foundry business couldn't get enough customers, and yet they're willing to cut their own product line? Sure they need to make a profit, but IMHO they should be after volume at this point.
Even the Arc B580 GPUs could have been a bigger win if they were actually ever in stock at MSRP, I say that as someone who owns one in my daily driver PC. Yet it seemed oddly close to a paper launch, or nowhere near the demand, to the point where the prices were so far above MSRP that it made the value really bad.
Same as how they messed up the Core Ultra desktop launch, of their own volition - by setting the prices so high that they can’t even compete with their own 13th and 14th gen chips, not even mentioning Ryzen CPUs that are mostly better in both absolute terms and in the price/perf. A sidegrade isn’t the end of the world but a badly overpriced sidegrade is dead on arrival.
Latvia, when it released over here it was around 350 EUR for a LE B580, not from an individual scalper but a regular e-commerce store that sells all sorts of parts. For comparison's sake, a regular RTX 3060 12 GB (not Ti) was 300 EUR around that time.
In other countries, any place that sold them at near-MSRP very much was out of stock immediately (not that there was that much stock to begin with, which is my critique), leaving other vendors to raise the prices to around 350 USD and the greedier ones and scalpers to go all the way up to 400 USD; I saw those listings myself.
Whether anyone actually bought those is another question entirely, but a GPU that's only available to you at 350 USD but should have been sold for 250 USD (and performs well for that price) is just plainly bad value.
It is nice that the prices have since dropped, however the hype around it has already passed, so we're in the long tail of the product's lifecycle, especially with things like the CPU driver overhead being identified and getting lots of negative press. They had a chance to move a lot of units at launch thanks to positive reviews from pretty much everyone... and they fumbled it.
> Cutting products that don't have 50 percent margins seems like a bad choice when their goal should be filling their advanced fabs.
It seems like a bad choice at all times. A product with a 45% margin -- or a 5% margin -- is profitable. It's only the products with negative (net) margins that should be cut.
And meanwhile products with lower margins are often critical to achieving economies of scale and network effects. It's essentially the thing that put Intel in its current straits -- they didn't want to sell low-margin chips for phones and embedded devices which gave that market to ARM and TSMC and funded the competition.
Well, often you cut the 5% margin product because you should focus your people and your capability on growing your 50% products. Sure, if the 5% products are well established, keep selling them, but usually in tech you need to continue to invest in the 5% product to keep it up to date.
Intel did this with memory in the 80s. Memory was still profitable, and could be more so again (see Micron), but it would have required a lot of investment.
But Intel might not be in that position, and filling the fabs by itself can definitely be worth it.
And if you don't have the capacity in the new fab anyway, maybe that isn't an issue, so it's hard to say from the outside.
Most of this is in properly accounting for capital costs (i.e. interest on borrowed money) when calculating net margins. If you have to invest capital in something then the interest cost between when you make the investment and get the return goes into the formula, but the number that comes out at the end is still going to have a plus sign or a minus sign in front of it and that matters more than the magnitude.
It's usually not about the number of people. If you have two projects and both of them are profitable then you can hire more people and do both, even if one of them is more profitable than the other. The exception would be if that many qualified people don't exist in the world, but that's pretty rare and in those cases you should generally divert your focus to training more of them so you don't have such a small bus factor.
Another common mistake here is the sunk cost fallacy. If you have to invest ten billion dollars to be able to do both X and Y and call this five billion in cost for each, and at the end of that one of them has a 5% net margin and the other a 75% net margin, or even if the first one has a -5% net margin, you may not be right to cancel it if you still have to make the full ten billion dollar investment to do the other one. Because it's only -5% by including a five billion dollar cost that you can't actually avoid by canceling that product, and might be +20% if you only include costs that would actually be eliminated by canceling it.
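To make that arithmetic concrete, here is a hypothetical set of numbers (not from Intel or any real product) that lands on exactly that -5% vs. +20% split, sketched in TypeScript:

// Hypothetical figures, chosen only to reproduce the -5% vs +20% illustration above (all in $B).
const revenue = 20;           // revenue of the "marginal" product
const avoidableCosts = 16;    // costs that actually disappear if it is cancelled
const allocatedShared = 5;    // half of the shared $10B investment, allocated on paper

// With the shared investment allocated in: (20 - 16 - 5) / 20 = -5%
const accountingMargin = (revenue - avoidableCosts - allocatedShared) / revenue;
// Counting only costs that cancellation would actually eliminate: (20 - 16) / 20 = +20%
const incrementalMargin = (revenue - avoidableCosts) / revenue;
console.log(accountingMargin, incrementalMargin); // -0.05, 0.2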
I agree, but the issue with hiring people is that your pipeline is also limited, along with other functions. So you need to hire more people to hire more people. You need more admin staff and so on. And management needs to focus on that and build up the organization.
I also don't think it's as rare as you suggest to be unable to find people; it depends on your location and industry. It takes time to add people. Good people existing somewhere in the world wasn't enough, especially before remote work.
Also, if you can grow your 50% margin business even a little bit faster by focusing on it over the 5% margin business, it doesn't take much of a gain for that to be the better choice. So if achieving that 5% margin takes a lot of your best people, shifting them to the higher-margin business might make sense.
But I agree, if you are a mature company that has the infrastructure needed to keep that product alive, then doing so makes sense. Especially because in the future it could become a more important business.
That is arguable. Regardless of everything else, currently Intel stock is up about 50% from recent averages. If investors were so hurt, they really should be selling right now, and there seems to be reason to do so because Intel's troubles have not gone away with this Nvidia stake that does not touch Intel's rotten underbelly.
I don't think this is Intel trying to save itself; it's nVidia. Intel GPUs have been in 3rd place for a long time, but their integrated graphics are widely available and come in 2nd place because nVidia can't compete in the x86 space. Intel graphics have been closing the gap with AMD and are now within what, a factor of 2 or less (1.5?).
IMHO we will soon see more small/quiet PCs without a slot for a graphics card, relying on integrated graphics. nVidia has no place in that future. But now, by dropping $5B on Intel they can get into some of these SoCs and not become irrelevant.
The nice thing for Intel is that they might be able to claim graphics superiority in SoC land since they are currently lagging in CPU.
Way back in the mid-to-late 2000s, Intel CPUs could be used with third-party chipsets not manufactured by Intel. This had been going on forever, but the space was particularly wild, with Nvidia being the most popular chipset manufacturer for AMD and also making inroads with Intel CPUs. It was an important enough market that when ALi introduced AMD chipsets that were better than Nvidia's, they promptly bought the company and spun down operations.
This was all for naught, as AMD purchased ATi, shutting out all other chipsets, and Intel did the same. Things actually looked pretty grim for Nvidia at this point in time. AMD was making moves that suggested APUs were the future, and Intel started releasing platforms with very little PCIe connectivity, prompting Nvidia to build things like the Ion platform that could operate over an anemic PCIe x1 link. These really were the beginnings of strategic moves to lock Nvidia out of their own market.
Fortunately, Nvidia won a lawsuit against Intel that required them to have PCIe x16 connectivity on their main platforms for 10 years or so, and AMD put out non-competitive offerings in the CPU space, such that the APU takeoff never happened. If Intel had actually developed their integrated GPUs or won that lawsuit, or if AMD had actually executed, Nvidia might well be an also-ran right around now.
To their credit, Nvidia really took advantage of their competitors' inability to press their huge strategic advantage during that time. I think we're in a different landscape at the moment. Neither AMD nor Intel can afford to boot Nvidia, since consumers would likely abandon them for whoever could still slot in an Nvidia card. High-performance graphics is the domain of add-in boards now and will be for a while. Process node shrinks aren't as easy and cooling solutions are getting crazy.
But Nvidia has been shut out of the new handheld market and hasn't been a good total package for consoles, as SoCs rule the day in both of those spaces, so I'm not super surprised at the desire for this pairing. But I did think Nvidia had given up these ambitions and was planning to build an adjacent ARM-based platform as a potential escape hatch.
> It was an important enough market that when ALi introduced AMD chipsets that were better than Nvidia's, they promptly bought the company and spun down operations.
This feels like a 'brand new sentence' to me because I've never met an ALi chipset that I liked. Every one I ever used had some shitty quirk that made VIA or SiS somehow more palatable [0] [1].
> Intel started releasing platforms with very little PCIe connectivity,
This is also a semi-weird statement to me, in that it was nothing new; Intel already had an established history of chipsets like the i810, 845GV, 865GV, etc which all lacked AGP. [2]
[0] - Aladdin V with its AGP instabilities, MAGiK 1 with its poor handling of more than 2 or 3 'rows' of DDR (i.e. two double-sided sticks of DDR turned it into a shitshow no matter what you did to timings; 3 rows was usually 'ok-ish' and 2 was stable.)
[1] - SiS 730 and 735 were great chipsets for the money and TBH the closest to the AMD760 for stability.
[2] - If I had a dollar for every time I got to break the news to someone that there was no real way to put a Geforce or 'Radon' [3] in their eMachine, I could have had a then-decent down payment for a car.
[3] - Although, in an odd sort of foreshadowing, most people who called it a 'Radon', would specifically call it an AMD Radon... and now here we are. Oddly prescient.
I'm thinking the era of "great ALI chipsets" was more after they became ULi in the Athlon 64 era.
I had a ULi M1695 board (ASRock 939SLI32-eSATA2) and it was unusual for the era in that it was a $90 motherboard with two full x16 slots. Even most of the nForce boards at the time had it set up as x8/x8. For like 10 minutes you could run SLI with it until nVidia deliberately crippled the GeForce drivers to not permit it, but I was using it with a pretty unambitious (but fanless-- remember fanless GPUs?) 7600GS.
They also did another chipset pairing that offered a PCIe x16 slot and a fairly compatible AGP-ish slot for people who had bought an expensive (which then meant $300 for a 256MB card) graphics card and wanted to carry it over. There were a few other boards using other chipsets (maybe VIA) that tried to glue together something like that, but the support was much more hit-or-miss.
OTOH, I did have an Aladdin IV ("TXpro") board back in the day, and it was nice because it supported 83MHz bus speeds when a "better" Intel TX board wouldn't. A K6-233 overclocked to 250 (3x83) was detectably faster than at 262 (3.5x75).
ALi was indeed pretty much on the avoid list for me for most of their history. It was only when they came out with the ULi M1695, made famous by the ASRock 939Dual-SATA2, that they suddenly became a contender for best, seemingly out of nowhere. One of the coolest boards I ever owned, and it was rock solid for me even with all of the weird configs I ran on it. I kind of wish I hadn't sold it, even today!
I remember a lot of disappointed people on forums who couldn't upgrade their cheap PCs as well, but there were still motherboards available with AGP slots for Intel's best products. Intel couldn't just remove it from the landscape altogether (assuming they wanted to) because they weren't the only company making chipsets that supported Intel CPUs. IIRC Intel/AMD/Nvidia were not interested in making AGP+PCIe chipsets at all, but VIA/ALi and maybe SiS made them instead, because it was still a free-for-all space. Once that went away, Nvidia couldn't control their own destiny.
Nvidia does build SoCs already, the AGXs and other offerings. I'm curious why they want Intel despite having the technical capability to build SoCs.
I realize the AGX is more of a low-power solution, and it's possible that Nvidia is still technically limited when building SoCs, but this is just speculation.
Does anybody know the actual ground-truth reasoning for why Nvidia is buying into Intel despite the fact that Nvidia can make their own SoCs?
Sometimes HN users appear to have absolutely zero sense of scale. Lifetime sales numbers of those are the equivalent of hours to days' worth of Switch 2 sales.
I didn't do the WASM port. It was started by our previous lead whitequark, with some more work done by a few others. It's not quite complete, but you can do some 3d modeling. Just might not be able to save...
Oops, I did not read that before going ham in the editor. It seems that the files are stored inside the emscripten file system, so they are not lost. I could download my exported 'test.stl' with the following JavaScript code:
// Read the exported file out of Emscripten's in-memory filesystem (returns a Uint8Array)
var data = FS.readFile('test.stl');
// Wrap it in a Blob and trigger a browser download via a temporary link
var blob = new Blob([data], { type: 'application/octet-stream' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'test.stl';
a.click();
One thing worth noting: everything but the menus and popups is drawn with OpenGL. The text window uses GNU Unifont, which is a bitmap font. All the interactions are handled the same way as in the desktop version.
Sounds like that would be something on a different abstraction level than WebAssembly itself. Or are there blockers in the platform that preclude an OpenMP implementation targeting WebAssembly?
I just want to say that Solvespace is amazing, I was able to follow a tutorial on YouTube and then design a split keyboard case for CNCing, all pretty quickly a few years ago.
We use something similar in Solvespace. Where one might have used polymorphism, there is one class with a type enum and a bunch of members that can be used for different things depending on the type. Definitely some disadvantages (what does valA hold in this type?) but also some advantages from the consistency.
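For anyone curious what that pattern looks like, here is a rough hypothetical sketch (TypeScript standing in for Solvespace's actual C++, with made-up names):

// One class, a type tag, and general-purpose fields whose meaning depends on the tag.
enum EntityType { Point, Circle }

class Entity {
  constructor(
    public type: EntityType,
    public valA: number,   // Point: x coordinate; Circle: radius
    public valB: number,   // Point: y coordinate; Circle: unused
  ) {}

  describe(): string {
    if (this.type === EntityType.Point) {
      return `point at (${this.valA}, ${this.valB})`;
    }
    return `circle of radius ${this.valA}`;
  }
}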
VB's Variant is a bit "assume I know what this is" dynamic typing, almost Python-like duck typing. It sounds like your solution is also more akin to duck typing.
Sum Types are still themselves Types. The Sum Type itself like `int | string` is a Type. That Type generally only provides the intersection of methods/abilities, which depending on the language might only be a `toString()` method and the `+` operator (such as if the language uses that operator for string concatenation and also supports implicit conversions between int and string). The only safe things to do are the methods both types in the Sum support, so that intersection is an important tool to a type system with good Sum Types support.
But the other part of it is that Sum Types are still made up of types with their own unique shapes. When you pattern match a Sum Type you "narrow" it to a specific type. You can narrow an `int | string` to an `int` and then subtract something from it or multiply something with it. You can narrow it to a `string` and do things only strings can do like `.split(',')`. Both types are still "preserved" as unique types inside the Sum Type. A good compiler tests your type narrowing in the same way that a good OOP compiler will typecheck most of your casts to higher or lower types in an inheritance tree.
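In TypeScript syntax, for example, that narrowing looks roughly like this (a minimal sketch with a hypothetical function name):

function describe(v: number | string): string {
  if (typeof v === 'string') {
    // narrowed to string here, so string-only methods are allowed
    return v.split(',').join(' / ');
  }
  // narrowed to number here, so arithmetic is allowed
  return (v * 2).toString();
}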
Given those cast checks, it's often more the case that you want to implement Sum Types as inheritance trees with casts, because the compiler will be doing more of the safety work. You write a base-class, probably abstract, to represent the Sum with only the intersection methods/operator overloads, then leaf classes, probably sealed/final, for each possible interior type the Sum can be, and use type checks to narrow to the leaf classes or cast back up to the SumType as necessary.
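A rough sketch of that hand-rolled encoding (hypothetical class names, TypeScript standing in for a typical OOP language):

// The abstract base exposes only the "intersection" behaviour both alternatives share.
abstract class IntOrString {
  abstract display(): string;
}
class IntCase extends IntOrString {
  constructor(public value: number) { super(); }
  display(): string { return this.value.toString(); }
}
class StringCase extends IntOrString {
  constructor(public value: string) { super(); }
  display(): string { return this.value; }
}

// Narrowing is done with runtime type checks instead of pattern matching.
function double(v: IntOrString): IntOrString {
  if (v instanceof IntCase) return new IntCase(v.value * 2);
  return v; // still the string leaf, left untouched
}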
That's actually a lot of work in most OOP languages to build. The thing about Sum Types in languages that support them is that a lot of that, from pattern matching/narrowing to computing the intersection of two types, is free/cheap/easy from those compilers. The compiler understands `int | string` directly, rather than needing to build by hand a class IntOrString with sub-types IntOrStringInt and IntOrStringString. (It starts to get out of hand when you start adding more types to the sum: IntOrStringOrBoolIntOrString for narrowing from `int | string | bool` down to `int | string`, for instance, which in OOP maybe can't be the same type as IntOrString alone depending on how you want to narrow/expand your sum types.)
(Also, not to get too much into the weeds but building a class structure like that is also more like a specialization of Sum Types called Discriminated Unions because each possibility in the Sum Type has a special "name" that can be used to discriminate one from the other. A lot of people that want Sum Types really just want Discriminated Unions, but you can build Discriminated Unions easily inside a Sum Type system without native support for it, but you can't as easily build richer Sum Types with just Discriminated Unions. In Typescript you will see a lot of Discriminated Unions built with a `type` field of some kind, often a string but sometimes an enum or symbol, which would also resemble your solution, but again the key difference is all the types that are summed together may have entirely different shapes and Typescript supports the sum type intersection calculations automatically, so maybe the top level sum just has the `type` field itself and nothing else, but you can use that `type` field to narrow to very specific shapes.)
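As a concrete example of that TypeScript idiom (made-up shapes, just to show the `type` field doing the narrowing):

type Shape =
  | { type: 'circle'; radius: number }
  | { type: 'rect'; width: number; height: number };

function area(s: Shape): number {
  switch (s.type) {
    case 'circle':
      return Math.PI * s.radius ** 2; // narrowed to the circle shape
    case 'rect':
      return s.width * s.height;      // narrowed to the rect shape
  }
}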
https://altair.com/inspire
It identifies features as such even from STEP.