> The more software developers there are, the easier it becomes to automate the drudgery and toil of life.
By "easier", you must mean "cheaper", but I don't believe that at all. There are now more software developers out there than ever before, yet project costs are the highest they have ever been and failure rates remain high.
Software developers are preoccupied with creating problems for themselves, which they then get to bill clueless clients for. The more programmers there are (especially juniors), the worse it will be.
Also, don't assume people actually want automation everywhere. Automation means killing jobs. Managers don't want to fire, they want to hire; it boosts their ego. Employees don't want to get fired, so when in doubt, they'll come out against more automation in their day-to-day tasks.
If the economy really needed way more programmers, their wages would go through the roof (across the country, not just in overpriced Silicon Valley) and even otherwise disinclined groups of people would go for it. As it is, the wages are commensurate with the effort required to learn the profession.
Except there isn't any problem here whatsoever. Yes, it's coupled, but it doesn't matter. No, this kind of code doesn't scale. It doesn't have to.
The time you spent writing this comment was longer than what it took to write this code. Considering what kind of site this is, that code may work indefinitely. Or, if necessary, someone from the future will need to change that 3 to a 4 at some point. In terms of total cost, it's almost certainly the cheapest solution.
> It costs almost nothing more to write "event" instead of "ev" or "removeElement" instead of "remEl", but it makes the code much more readable.
No, it doesn't. Unless you're an absolute beginner, it takes you a couple of seconds to realize that in this codebase, "ev" (or "evt" or even "e") means "event". The same goes for "remEl". If that's consistent, then it's not a big deal at all.
Having fewer characters makes code more readable (or rather "scannable") as well; it's just another tradeoff.
>Having fewer characters makes code more readable (or rather "scannable") as well; it's just another tradeoff.
Only to a point, otherwise minified js would be more legible than plain source. It takes "a couple of seconds" to scan the codebase and find out what el or ev refer to, but it would take zero seconds if they were just called "event" or "element."[0] There's no gain in readability from shaving off one or two characters in that case. It just "feels" more efficient because it resembles the common style of lower level code.
[0] Assuming those aren't reserved words in JavaScript; I don't know.
You are allowed to use both event and element, as they aren't strictly reserved keywords.
I think it's generally frowned upon, though. For example, I believe "event" is defined as a global in browsers (on window, perhaps?), so you run the risk of shadowed variables and accidentally referring to the wrong thing.
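That shadowing risk can be sketched like this. In browsers, `window.event` is a deprecated (but still present) global that refers to the event currently being dispatched; the snippet simulates that global via `globalThis` so it runs outside a browser too.

```javascript
// Browsers expose a deprecated global `window.event` holding the event
// currently being dispatched. Simulate it here so the snippet is
// self-contained outside a browser:
globalThis.event = { type: "legacy-global" };

// Forgot to declare the parameter: `event` silently resolves to the global.
function buggyHandler() {
  return event.type;
}

// Declared parameter shadows the global and refers to the actual argument.
function safeHandler(event) {
  return event.type;
}

const fromGlobal = buggyHandler();              // "legacy-global"
const fromArg = safeHandler({ type: "click" }); // "click"
console.log(fromGlobal, fromArg);
```

Naming the variable something other than `event` (e.g. `ev`) sidesteps the ambiguity entirely, which may be part of why the shorter name persists in browser code.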
Fewer characters really only make sense in some scenarios, e.g. "unimportant" variables (loop counters, temporary storage inside a procedure). A verbose 'variablesToKeepTrackOfPositionInThisLoop' is obviously pointless when 'i' will do.
However, renaming "important" things to make them quicker to read or type is, in my experience, a mistake once your code base is more than just a handful of files. Descriptive names make it much easier to understand the code, which matters more. As the number of files grows beyond a few, any benefits are rapidly lost, because now you need to keep all that knowledge in your head and start jumping around between 5, 10, 20 or more files just to work out what everything is, let alone what it is doing. This is why naming things in code is so important (and sometimes so hard).
This sort of attitude about brevity & speed appears to be a common problem with some developers I come across - they are absolutely obsessed with "productivity" when it comes to writing code. They've just got to have their 97 plugins/extensions to their editor and they simply must have their finely-honed emacs keybindings. If you are focusing on just churning out code as fast as humanly possible (i.e. "productivity"), then your job as a programmer can probably be safely replaced with a couple of shell-scripts.
I've never been in a situation where the limiting factor in programming has been how fast I can read it or write it - the limiting factor has always been understanding the problem at hand, and designing a solution that solves the problem and is maintainable.
The point I'm making is not "use short variables everywhere", it's that in this case, the verbose alternative is not "much more readable".
Abbreviating "element" to "el" or using "attr" instead of "attribute" (etc.) can significantly reduce noise in web client code. Everybody either knows what it means, or they shouldn't be editing the code in the first place.
> Having fewer characters makes code more readable (or rather "scannable") as well; it's just another tradeoff.
It might, in a specific circumstance, while in another it might make the code significantly less readable. That is an indication that character count is probably not a good metric for readability.
Couldn't agree more; sometimes overly long names make things much worse. I don't know why this is more common in Java code bases, but I've definitely seen function calls that span 6+ lines (have to maintain that 80-column limit) due to horrible naming. By the time I finish reading one, I've already forgotten what the base class is even called.
However, I've also seen large classes with members like "stN" for "set N". WTF is N, and why can't we just type "set"? That one character doesn't save anybody any time at all. Or nested loops with i and j reversed, just to make it especially painful.
> It's a good point, if you overlook the fact that the British destroying the Indian & Chinese economies (through war/invasion) created a vacuum for the US economy (a British colony at the time) to flourish.
You're making some sort of moral argument, but the argument against protectionism isn't moral. Protectionism is stupid from an economic perspective, whether India or the U.S. does it. If there's a moral argument to be made, it's against protectionism: it denies citizens the opportunity to partake in free trade and therefore makes them poorer.
What's in it for India if everyone has to use the (inevitably crappier) Indian version of Google, Facebook or Amazon? It's not like anybody outside of India is going to want to use that stuff, so it doesn't bring in foreign capital. It'll just reallocate local tech talent to a domestic market that is less profitable than the international market. It makes no sense.
That's exactly why people need to bring in emotions and hot words like "colonialism": To make a psychologically powerful (but irrational) case. It might just work.
> Never forget India was #1 in world GDP (for more than 1000 years) pre-invasion.
I'd be skeptical about estimates of GDP dating to 1000 A.D., but if there's anything to learn here, it's that you should probably spend some of that GDP on defense, lest you get invaded by tiny old Britain.
> Russia with Putin as its leader stands in the way of American domination of Eurasia. Hence, the military encirclement of Russia and the belligerent anti-Russian rhetoric from American leaders.
Is the US really dominating Europe, and if so, is that not better than being dominated by Russia? Why did the Ukrainian civil war break out? Wasn't it the desire of ethnic Ukrainians to break away from Russia? "Greater Russia" is something many ethnic Russians want, and Putin would take it if he believed he could get away with it.
Historically, if your nation bets against the US, you end up on the losing side. We're never going to live in a world without major superpowers, and the US - imperfect as it may be - is by far the best such "regime" that has ever existed in history. I'm worried about a world where the US retreats from that position, leaving it to autocrats to divide the spoils among themselves.
Of course, if you're an international leftist, you're knee-deep in denial and resentment about the abject failure of every regime that ever challenged US hegemony. You crave some sort of justice that history will never deliver to you, defending some of the most oppressive countries in the world while exaggerating the oppression at home. Cognitive dissonance at work!
Having said that, the Democrats really need to move on from whining about the Russians interfering in the elections, it's just not effective.
Military people applying their experiences to company leadership are problematic, because they worked with a selection of people that doesn't reflect the general population.
In the general population, not everyone wants to be a leader, not by a long shot.
People tend to have a strong urge to avoid responsibility (and therefore decision-making); that's the whole point of having superiors. If everyone were fully responsible for what they do, they wouldn't get their work done.
Even an incompetent superior helps here, because at least in theory you can divert responsibility to them. It allows people to do their work for the majority of the time, when things don't go wrong. If things do go wrong, everyone gets to blame everyone else somehow; maybe the company tanks, but life goes on.
You may point out problems here and there, but no obvious alternative (like flat structures) is devoid of problems either.
I'd be wary of solutions that take human responsibility completely out of the equation (such as AI). Sometimes a decision not made is actually for the best.
I think the idea is that rationalizing to one person, who can then represent your ideas to the many others with a vested interest, gives you leverage as an employee (it's easier to convince one person than to convince everyone). Of course, managers can do this well or badly, and done poorly, it can create more work for their employees.
That's not true. I'm sometimes faced with a lot of companies to choose from, and if one of those companies has a badly designed logo, I'll immediately become suspicious of all their other qualities.
On the other hand, a really good logo doesn't have the opposite effect, because I know some designer (or AI, whatever) made it for them.
I can't remember ever having been stunned by a logo.
The height of my experiences could be described as:
"Oh, nice logo." or "Hmm, clever."
The line between "clever" and "trying too hard to be clever" is also quite fine.
Some logos become iconic, but that has as much to do with the brand itself as with the logo. Even if Microsoft had the Nike logo, it would still be uncool.
So, a "feature" in the installer allows an already installed malicious app to install even more malicious apps.
That new malicious app then somehow ends up with more permissions than the original malicious app or the installer had.
Guys, that's an Android bug. This is exactly the kind of thing that needs to be fixed at the OS level; you can't rely on the competence of arbitrary developers to maintain the security of the system.
Of course, it's an opportunity for Google to use their own broken security model as an argument for why apps should only come from their own "curated" channels (which presumably also host the malware exploiting this). It just so happens to be their source of revenue...
There are two ways to fix this. One is to not permit dynamic code loading or app installs from outside the Play Store. This is Apple territory and pisses people the hell off. The other is to not have any world-writable filesystem at all. I guess you could do this, but it messes with features surrounding music and pictures that you do want to share between apps.
Epic literally could have used the private filesystem that is right there for exactly this purpose: keeping files protected from other apps.
No, this is a permissions problem. Reading and writing to an application directory does not cause any issues until some code in that directory starts running with privileges it shouldn't have.
It should not be possible for an application that happens to install other applications to bypass the user for specific permissions. The user must be asked explicitly.
This has nothing to do with being able to "sideload" apps or not. Sideloading apps is actually possible on iOS, it's just such an effort (getting a developer account) that it's rarely done in practice.
Yep. If I might add a footnote for HN reader convenience: Epic actually fixed it by following the second approach you mentioned.
According to Google issue tracker: "This patch changes the default APK storage directory from external to internal storage, which should prevent MITD attacks during the install flow."