
Android Studio became great after I upgraded from 16 to 48 gigabytes of RAM. Thanks for that, Gradle!


Honestly I don't see much of a problem if this is applied to imports of single items by an end user. It used to be that I had trouble importing some devices partly because the power supply was not certified by the local regulatory entities. Most of what people import in single quantities is electronics with switch-mode power supplies that work from 100-240 V at 50/60 Hz. I doubt many people are importing a hairdryer or a toaster. Personally, if a power supply is approved by the FCC or some other reputable entity I consider it good enough for my personal use, even if it has a foreign plug.

It is a problem when importing large quantities to resell, though; I'm not defending the ability to import hundreds of death traps and sell them to people.


As I understand it, the most frequent type of importation by item count is wholesale; we can be talking about the import of 500k phone chargers.

I think plug types are not a great risk, as users will usually not want that. But in my head the risk is that we import 500k of something that technically works but is off spec by 10 V or 10 Hz, or whose tolerance specs are too wide or too narrow. It's obvious how too narrow a tolerance can cause issues, but too wide isn't ideal either, as there are tradeoffs; you end up importing Swiss Army knife products. Which makes sense for big expensive electronics, but stuff like phone chargers? Subterranean or aerial cabling?

The task of verifying the quality of something is distinct from the task of verifying that it conforms to local standards. And I wouldn't put it past cargo-culting governments to figure that if it's good enough for the US, it's good enough for us.


> Performance varied between a 4.3 percent slowdown to a 2.3 percent speedup depending on the way it was bent.

I have practically zero knowledge of the physics behind semiconductors, so I can't begin to guess why this occurs, but I find it fascinating nonetheless.


My expectation is that the core clock circuit has its capacitance and/or inductance change, thus changing the timing of the clock.

+/-5% is a region where everything in the digital domain probably still works. Your rise/fall time and dead-time / other critical timings need to be robust against some degree of variability. Transistors can have rather wide manufacturing variability after all (certainly wider than 5%).

So everything still works but the core clock is changing. Which btw, happens in traditional silicon circuits as they heat up or cool down.

A low-precision RC oscillator changing by 5% or so between 20 °C and 100 °C is within expectations. In fact, a -50%/+100% change wouldn't surprise me.
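
Back-of-the-envelope, with made-up component values (this is a generic textbook op-amp relaxation oscillator, not anything specific to this chip):

    import math

    # Relaxation oscillator with thresholds at 1/3 and 2/3 of the supply:
    # period T = 2*R*C*ln(3), so frequency scales as 1/(R*C).
    K = 2 * math.log(3)
    R, C = 10e3, 1e-9            # 10 kOhm, 1 nF (hypothetical values)

    def freq(r, c):
        return 1.0 / (K * r * c)

    f0 = freq(R, C)
    f_drift = freq(1.05 * R, C)  # R drifts +5% with temperature or strain
    print(f"{f0/1e3:.1f} kHz -> {f_drift/1e3:.1f} kHz "
          f"({100 * (f_drift / f0 - 1):+.1f}%)")

A +5% drift in R knocks the frequency down by about 4.8%, which is the same order as the slowdown/speedup in the article.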

--------------

Old var-caps (variable capacitors) were tuned by twisting them tighter or looser. No joke. So that's where my expectation comes from: they've changed the capacitance of some core element that controls an important clock.


Many resistive materials, especially semiconductors, exhibit changes in resistivity caused by mechanical strain.

This so-called piezoresistive effect is frequently used for measuring the deformations of various objects, by attaching piezoresistive wires to them, which can measure for instance the amount of bending of the object.

Such a flexible integrated circuit might also have changes in the resistance of the transistor channels or of the interconnection traces, which will change the maximum permissible clock frequency. If an RC oscillator is used to generate a clock signal, its frequency will change with the bending of the circuit, more likely due to variations of the resistance than of the capacitance, because it is not likely for the bending to cause large variations in the thickness of the dielectric of the capacitors or in the area of the electrodes, even if that is also possible.

The variable capacitors whose capacitance is changed by twisting have this behavior because their electrodes overlap only partially and the twisting changes the area of the overlapping region. No such thing happens when twisting or bending a normal capacitor.
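
To get a feel for the magnitudes (the gauge factors and strain below are generic textbook values, not measurements of this circuit):

    # Fractional resistance change from strain: dR/R ~= GF * strain.
    # Semiconductor gauge factors dwarf metal ones, which is why the
    # transistor channels are the likelier culprit.
    strain = 1e-3    # 0.1% bending strain (assumed)
    for name, gf in [("metal trace", 2.0), ("semiconductor channel", 100.0)]:
        print(f"{name}: dR/R = {gf * strain:.1%}")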


> which will change the maximum permissible clock frequency.

Emphasis on _permissible_ clock frequency. Because how is the core logic supposed to figure out how much the clock frequency changed or how much the resistance of the wires has changed?

> because it is not likely for the bending to cause large variations in the thickness of the dielectric of the capacitors or in the area of the electrodes, even if that is also possible.

Yes but no. Everything you said is correct, but you're looking at the wrong dielectric. The plastic PCB is obviously unchanging, even as it gets balled up.

However, there's another dielectric here that's normally ignored that suddenly becomes relevant. The _relevant_ dielectric (to this discussion) is the air. As the capacitor rolls up into a cylinder shape, the copper-air-copper capacitor has the dielectric (air) get thinner-and-thinner.
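
Toy numbers to illustrate (the geometry is entirely made up):

    # Parallel-plate model: C = eps0 * A / d, so the capacitance grows as
    # the air gap between the overlapping copper layers shrinks.
    EPS0 = 8.854e-12                    # vacuum permittivity, F/m
    A = 1e-4                            # 1 cm^2 of overlapping copper (assumed)
    for d in (1.0e-3, 0.5e-3, 0.1e-3):  # air gap shrinking as the board rolls up
        print(f"gap {d*1e3:.1f} mm -> {EPS0 * A / d * 1e12:.2f} pF")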

-------------------

However, to your point that this is "resistance"... the fact that "rolling one way" leads to -speed and "rolling the other way" leads to +speed suggests that it's a resistance issue, because the strain/resistance relationship is known: tension causes the resistance of copper to grow, while compression causes it to drop.

If the oscillator is an RC-type oscillator (e.g. a 555-timer-like oscillator), then yes, I can see the resistance theory playing out. And 60 kHz is slow enough that RC-type oscillators are possible.


> Because how is the core logic supposed to figure out how much the clock frequency changed

It is common for such logic circuits to use clock generators made with a so-called ring oscillator, i.e. a chain containing an odd number of inverters connected in a loop. The clock period will be a multiple of the delay through a logic inverter.

In this case the actual clock frequency tracks exactly all changes in the permissible clock frequency, regardless of their causes, including temperature and mechanical deformation.
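
A toy version of the math, with invented delays:

    # Ring oscillator: an odd chain of N inverters; each node toggles once
    # per two trips around the loop, so the period is 2 * N * t_inv.
    N = 5                  # odd number of inverters
    t_inv = 100e-12        # 100 ps per stage (assumed)

    f_nominal = 1 / (2 * N * t_inv)
    f_bent = 1 / (2 * N * t_inv * 1.05)   # stages 5% slower under strain/heat
    print(f"{f_nominal/1e6:.0f} MHz -> {f_bent/1e6:.0f} MHz")

Since the logic paths are built from the same transistors as the ring, they slow down by roughly the same factor, so timing margins are roughly preserved.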

> As the capacitor rolls up into a cylinder shape, the copper-air-copper capacitor has the dielectric (air) get thinner-and-thinner.

I am not sure which is the copper-air-copper capacitor to which you refer. On a PCB, there are parasitic copper-air-copper capacitors between traces, but they have very little influence on clock frequencies. On a normal integrated circuit, there is no air. The metal layers are separated by insulator layers and the top metal is covered by a passivation layer. This flexible circuit should also be covered by some passivation layer.

Replacing the copper-air-copper capacitor in your argument with a copper-insulator-copper capacitor: any circuit has two kinds of capacitors, those that are made intentionally, with two overlapping metal electrodes and a very thin insulator layer between them, and the parasitic capacitors that exist between any metal traces.

Your argument is valid for the parasitic capacitors, because the distance between traces will vary with bending and some parasitic capacitors will become larger, while others will become smaller. The effect of each of the parasitic capacitors on the permissible clock frequency is small and the global effect of all parasitic capacitors is unpredictable without a concrete circuit layout, because their changes with the bending may compensate each other.

For an intentional capacitor, the effect mentioned by you also exists, but in most technologies for integrated circuits the thickness of the insulator of the capacitors is very small in comparison with the lengths and widths of the electrodes. In this case only a very small part of the electromagnetic field is outside the internal space of the capacitor and its influence on the value of the capacitance is negligible. Perhaps the capacitors made with this flexible technology are not as thin in comparison with their area as in other technologies, in which case the effect mentioned by you could be measurable, but I doubt it.


Neither do I, but I can tell you that if you manage to bend a normal CPU die, the performance loss is 100% (because you broke it).


An iPhone 8 still has a lot of processing power for headless home-server tasks. I use a much weaker ARM dev board as an SSH gateway and WireGuard VPN into my home network and it works just fine. The only thing I'd worry about is leaving the battery in the phone and having it puff up after being trickle-charged for months on end.

But if you remove the battery and mod the phone to power it directly from an external power supply you're all set!


I had to slightly rewrite the title to get it to fit within the 90 character limit.


Perhaps rewrite the title to: "Mangled fingers, no time off: Women who make Samsung semiconductors go on strike"


It has been edited since the submitted title GP was talking about, which was something like your suggestion; possibly by the mod team to desensationalise (I don't know).


I've always had a feeling that mixing caffeine and alcohol was a really bad thing, even when binge drinking as a teenager. Not that limiting myself to only alcohol was a healthy alternative or anything...

I wonder how much damage (if any) that caused considering I didn't do it very frequently. And how much damage could it do to someone that does it every weekend during their late teenage years?


If you're unlucky enough to experience memory errors in one of the intermediate buffers that files go through while being copied from one computer to another, an incorrect copy of the file might get written to disk.

When running software RAID, memory errors could also cause data to be replicated erroneously and raise an error the next time it's read. That said, if the memory is flaky enough that these errors are common, it's highly likely that the operating system will crash very frequently and the user will know something is seriously wrong.

If you want to make sure that files have been copied correctly, you can flush all kernel buffers and run diff -r between the source and destination directories to make sure everything is the same.
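
Something like this sketch, assuming Linux and placeholder paths (dropping the page cache needs root; without that step, diff may just re-read cached pages instead of what actually hit the disk):

    import subprocess

    subprocess.run(["sync"], check=True)              # flush dirty pages to disk
    with open("/proc/sys/vm/drop_caches", "w") as f:  # force future reads from disk
        f.write("3\n")
    r = subprocess.run(["diff", "-r", "/mnt/src", "/mnt/dst"])
    print("copies match" if r.returncode == 0 else "mismatch found!")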

It's probably way more likely to experience data loss due to human error or external factors such as a power surge than due to bad RAM. I personally test the memory thoroughly before a computer gets put into service and assume it's okay until something fails or it gets replaced. The only machine I've ever seen that would corrupt random data on disk was heavily and carelessly overclocked (teenage me cared about getting moar fps in games, not about having a reliable workstation lol)


I wonder whether something like Syncthing would notice a hash difference with data corruption caused by such a memory error? And whether it’d correct it or propagate the issue…


I still use an HP 48GX for all my university courses that require a calculator (not many nowadays, the best math courses are the ones without numbers in them). I bought it used in 2020 and it's been one of my favorite sub $25 purchases I've ever made!


I wish I'd had this all those times I had to guess which buffer exploded just by looking at the Valgrind output signaling invalid reads/writes.


I have noticed with two 7-year-old Snapdragon 835 devices that RAM and running a recent Android version make a huge difference.

I daily drive a OnePlus 5 running Android 14 through LineageOS, and the user experience for non-gaming tasks is perfectly adequate. This phone has 6 GB of RAM, so it's still on par with most mid-range phones nowadays. My only gripe is that I had to replace the battery, and disassembling phones is a pain.

Meanwhile a Galaxy S8 with the same SoC, 4 GB of memory, and stock Android 9 with Samsung's modifications chugs like there's no tomorrow.

I can understand that having two more gigabytes of memory can make a difference, but there is a night-and-day difference between the phones. Perhaps Android 14 has much better memory management than Android 9? Or is Samsung's slow and bloated software hampering this device?

Either way it's irritating to see that many companies don't test on old/low-end devices. Most people in the world aren't running modern flagships, which matters especially for companies targeting a worldwide audience.


This is what I miss after the removal of serviceable components on MacBooks. There was a time when I would buy the fastest processor and just-okay memory and disk; then, the first time I got a twinge of jealousy about the new machines, I'd buy the most Corsair memory they would guarantee to work, and a bigger, faster drive. Boom, another 18 months of useful lifetime.


Is the total useful lifetime longer than that of MacBooks with non-serviceable components? I see people around me easily using Airs for 5+ years.


Yes, but that's the slow-boiled frog syndrome. I use my computers for years as well, and whenever I get a new one I think "wow, why didn't I switch sooner, this is so much snappier".


As a counterpoint, I have a 2015 MacBook, a 2015 iMac, and a recent Apple Silicon MacBook. Of course I do Photoshop, Lightroom, Generative AI, etc. on the Apple Silicon system. But I basically don't care which system I browse the web with and, in fact, the iMac is my usual for video calls and a great deal of my web document creation and the like.

I suspect that people who have somewhat older Macs (obviously there's some limit) who find their web browsing intolerably slow probably have something else going on with either their install or their network.


> I do Generative AI,

This makes me call into question literally everything else in your post.

You might be able to do CPU-based generation for a few trials for fun, but you aren't running LLMs on a CPU on a daily basis.


I do some local image generation now and then (mostly using Photoshop). Are you happy now? My only point was that any CPU/GPU-intensive applications I run (and really most local applications) I do on my newish computer. But most stuff I run is in a browser.

The relatively little LLM use I do is in a browser and it doesn't matter which computer I'm doing it on.


I’ve been a Mac user since 2003 or so and I can confidently say my machines last 6-7 years as daily drivers then sunset over 2-3 years when I get a new computer. I always go tower, laptop, tower, laptop. They have a nice overlap for a few years that serves me well.


My MacBook Air (11-inch, Early 2014) is my only computer. I still don't feel like changing it so far...


Amateur… I am using a 2009 15" MacBook Pro Unibody, with the SuperDrive swapped for an SSD, another SSD as the main drive, and RAM boosted to 8 GB. OpenCore Legacy lets me update to a relatively recent version of macOS. The only things that are really annoying are the webcam, which doesn't work anymore, and a dead USB port.

So sad that these kinds of shenanigans aren't possible anymore.


Pfah, showoff. My 2005 Thinkpad T42p crawls circles around that thing - slowly. Maxed out to 2GB, Intel 120GB SSD with a PATA->SATA adapter (just fits if you remove some useless bits from the lid) and - what keeps this machine around - a glorious keyboard and 1600x1200 display. It even gets several hours on the battery so what more could you want?


Mmh… I see that we definitely have people of good taste around here.


I have one of these with a MacBook Pro 6,2 that I did the same upgrades to. However, I finally decided to retire it when the 2nd replacement battery swelled and Chrome stopped supporting OS X 10.13.

It didn't look like a good candidate for OpenCore Legacy because of the dual video cards, but it feels so gross recycling a perfectly working computer.


I run the one from 2011 (16 GB of RAM, though) and it runs a highly minimalistic Arch Linux. So far so good.


My Air isn't that old, and I'm eyeing a new one...

I find that a lot of my work is "remote" at this point. I'm doing most things on servers, VMs, and containers on other boxes. The few apps that I do run locally are suffering (the browser being the big offender).

Is most of what you're doing remote? Do you have a decent amount of RAM in that Air?


No, most of the work I do is local, but it's fairly easy stuff: some statistical software, Excel, Word, a browser. And my browser is not suffering that much, perhaps because I have 8 GB of RAM and I visit simple websites. Using an adblocker is fundamental, tho.


I have an Air from 2011 or 2012 that is out of storage with just the OS installed. I can't update or install any other software because the most recent update installed on it capped out the storage. Low-end Windows laptops (the $150-$300-at-Walmart type) have this same issue: 32 GB of storage, Windows takes 80% of the space, and you can no longer fit a Windows update on it.

I still have the Air with whatever the macOS is, but as soon as I have a minute I'm going to try to get Linux or BSD on it. I'm still sore at how little use I got out of that machine - and I got it "open box" "scratch and dent", so it was around $500 with tax. I got triple the usage out of a 2009-ish Eee PC (netbook).


You could try ChromeOS Flex on it?


The main thing that convinced me to get on the ARM Macs is the heat and battery life (which kind of go together). It's never uncomfortable on the lap.


Controversial counterpoint: Having standardised hardware causes optimisation.

What do I mean?

In game development, people often argue that game consoles hold back PC games. This is true to a point, because more time is spent optimising at the cost of features, but optimising for consoles also means PC players reap the benefit of decent baseline performance even on low-end hardware.

Right now I am developing a game for PC, and my dev team are happy to set the system requirements at an 11th-generation i7 and a 40-series (4070 or higher) graphics card. Obviously that makes our target demographic very narrow, but from their perspective the game runs, so why would I be upset?

For over a decade memory was so cheap that most people ended up maxing out their systems; the result is that every program is Electron.

For the last 10 years memory started to be constrained, and suddenly a lot of Electron became less shitty (it's still shitty), and memory requirements were something you could tell at least some companies started working to reduce (or at least not increase).

Now we get faster CPUs, the constraint is gone, and since the M-series chips came out I am certain that software that used to be usable on Intel Macs is becoming slower and slower. Especially the Electron stuff, which seems to perform especially well on M-series chips.


I want to research this route more, but the camera is an important component to me. I suspect there is a model of phone from 5-10 years ago that has an under-the-radar stellar camera that I would find "perfectly adequate". ("Perfectly adequate" is my favored state for most tech solutions.)


Yeah the camera is the only feature that would really make me want to switch phones. In my case it's more about being a broke CS student without a job lol.

But the low-end device thing still stands. At least here in Argentina where I live, most people can't buy a $1000+ phone without going into debt or saving for a stupid amount of time. Some people who really can't afford to do so still buy them, though. Maybe it's reasonable for some, but I never saw any appeal in spending so much money (relative to a monthly salary) on a non-necessity. I happily spent that kind of money on a PC to use for work/study, but a phone? Nah.


Same! The camera is the only part of the phone I want to spend real money on.

Beyond personal preferences, I live and work in an area of California where people could greatly benefit from easily accessible phones so I'm interested in what's possible.


The Huawei P10+, released in 2017, has very good Leica optics, on par with much newer iPhone or Galaxy devices.

https://www.gsmarena.com/huawei_p10_plus-8515.php


I don't think the RAM is the difference-maker. The old LG phone in question is a V35, which has 6 GB and a Snapdragon 845.

