It's cool they're using DP for some analytics. But it's not quite the holy grail Apple and its fans have been selling it as, because any analytics campaign using DP will eventually either average out to pure noise or end up being non-anonymous.
That sounds like you are making engineering decisions by superstition. Unless you have some source of data to back it up, please don't spread your subjective misinformation.
The fact that my nation is as wealthy as it is yet doesn't want to invest in the health of its people is insanity to me. Health insurance is essentially a bet that someone's health costs will be less than their monthly premium. Why would my country not take on that bet? It seems like something that would be way more effective if it were orchestrated on a national level. (ノಥ,_」ಥ)ノ彡┻━┻
Considering every single TV spot is selling us things to make us fat or die sooner, I would not want to take the other side of the bet against people getting sick.
Americans, we are all about freedom. Freedom to kill ourselves and eat twinkies. But we expect help when we are sick.
One side or the other has to budge a little. Do we give up being able to get the same insurance despite unhealthy habits? Do we give up helping the poor and the sick?
This is why we need fact-based research to understand these costs, and a direct tax on those items to recoup those costs from those who engage in those risks.
Go skiing? Well, there's a higher tax on the ticket because you have an X% higher risk of breaking a leg. Twinkies are currently classed as grocery food, so they actually carry less tax, when they should probably be taxed like booze or cigarettes (in effect a sin tax). Of course, sin taxes get abused: the money goes to the general fund rather than toward mitigating the undesirable behavior.
I feel your sentiment. But I always thought getting browsers to adopt Dart was a crazy and impossible idea. I was actually glad when I heard they had abandoned that plan.
Flutter confuses me. It's annoying that you have to use their DSL for layouts rather than HTML, and what about native widgets, like WebViews or video players?
With them drastically speeding up the Dart2JS compiler and implementing a strongly typed mode, I am quite pleased with Dart's direction.
I do still fear for its future. It would be really sad to see Dart die. The last web app I made, I was able to build the entire thing in under 90 KB, including HTML, CSS, SVGs, and JS. Dart allows me to write desktop-like applications for the web without any extra overhead.
> I feel your sentiment. But I always thought getting browsers to adopt Dart was a crazy and impossible idea. I was actually glad when I heard they had abandoned that plan.
I agree; a saner approach than adding one particular language at a time to browsers is to give browsers support/hooks which arbitrary language implementors can target, without having to perform gymnastics like compiling to JS.
It looks like WebAssembly is at least the first step in that direction, so Dart and others can use it as and when it matures, without creating a legacy of language-specific issues for future generations of Web implementors and archivists to deal with.
> But I always thought getting browsers to adopt Dart was a crazy and impossible idea.
I don't know — browsers are already planning to adopt a non-JavaScript language (WebAssembly); it's entirely possible that maybe, just maybe they might start to support more than just the late-90s mistake which is JavaScript.
Eventually someone's going to create an XML-based layout language to construct views in the Canvas, we could even call it HTML ;)
I'm not a web developer either and I look forward to WASM and its potential. But I really like HTML, I've used lots of layout languages before and I think HTML is the best I've used. The only thing I find HTML to be obtuse about is complex animations.
Well that doesn't bode well ;) Sierra's Dock is sooo buggy. I have to restart my machine numerous times a day because the Dock stops working, as in literally doesn't work, lol. I'm sure there's a way to restart just the Dock process, but eh, I've been quite busy. Sierra has been one of the buggiest versions of macOS I can remember using.
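For what it's worth, the Dock can usually be restarted without a full reboot; it runs as its own process and launchd relaunches it automatically when killed (a standard macOS trick):

```shell
# Kill the Dock process; launchd restarts it within a second or two.
killall Dock
```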
I would be surprised if they actually did, though. I know someone did some static analysis of the apps for macOS and iOS and found Swift was barely used at all.
The first thing I thought was: does this mean Tesla's going to start writing its software in Swift? If so, they're fucked :P. (BTW, I write non-critical consumer software in Swift as my day job.)
Swift has its problems but keeps getting better. I only get to use it for my at-home projects, since at work the portion of the codebase that is for iOS is Objective-C with no plans to switch any time soon. Old Obj-C too, and started by people used to programming in VB on MS platforms. So count your blessings.
I find Swift to be getting worse each year. While sure, I'm blessed that I don't have to write applications in assembly, I have grown jaded toward Swift. If I weren't an iOS developer, I would happily not use the language. I have been slowly positioning myself away from iOS development. I feel like a massive corporation like Apple can provide better tools for writing apps for their walled garden than what they are providing me.
ARC is a whole debate, but the one thing it is not is simple, and I would argue it's more error-prone than a traditional GC. I've used it for most of my career and I have seen what sloppy/unaware coding can do to it.
Lattner is a well-known expert on compilers. Having used Swift since its inception, I would call into question the reliability of the Swift LLVM compiler. In its current state (3.0.2) it's absolutely terrible and does not back up the sentiment: "But fast-moving Silicon Valley needs a fundamental shift in quality standards when it comes to safety-critical software, and if you look closely, Lattner has been leading this charge at the language and compiler level".
I'm not claiming Swift 3.0.2 is the most reliable language ever. All systems require time to mature and it is very young historically speaking. I am saying Lattner's design principles should realize better results for safety-critical software in the long run. Compiler bugs can be fixed; programmer error is tied to design. It might be Swift, it might be something more pared down, but he's a good guy to have in charge of an autonomous driving software platform.
I'm not sure I would say Swift is the kind of robust that is needed for safety critical software, but it is a nice step forward for application code. It mostly forces you to deal with things safely while still allowing Objective-C dynamic behaviors when you need to get around the restrictions (often to interact with other Obj-C code).
So yes, I can see why one would say Lattner is at the forefront of making more reliable compilers and pushing shifts in quality standards in fast-moving Silicon Valley. It is an awesome achievement to create a new language that improves both readability and safety, and even more awesome to get it mainstreamed so quickly.
There are a few people who I would like to trade places with. Lattner is one of them, Musk is another. They both fulfill different parts of my long-held dreams, so I consider them both to be quite awesome. It's cool that they'll be working together too, I guess.
Sure Lattner and Musk are interesting people, but I find the level of hero worship in the tech industry to be sickening.
Having used compilers for a few new languages (Rust, Go, Dart, Kotlin, Swift), Swift is the only one I've had any issues with, and Swift seems to be the only one to have adopted the "move fast and break things" philosophy of Silicon Valley. I dunno, I just don't see the argument.
Lattner is best known for starting the whole LLVM project. Swift is just a small side project in comparison, it just got adopted by Apple for some reason.
LLVM is one of the most influential pieces of software of the past decade. Hero worship isn't good but credit where credit is due.
You're confusing language development with autonomous vehicle development. Think of the long term goal. It's desirable to move fast and fix things with language development in the near term, to achieve a more perfect design and accelerate its maturation at the temporary cost of more volatility. After this process achieves a high level of maturity, said design principles may offer a safer, more reliable programming system that would be better suited to safety-critical applications.
Additionally, I'm sure we can all agree there is no substitute for maturation through time and usage in the field, which frankly is an argument for more popular languages over obscure ones. None of the ones you mentioned are ready for safety-critical system development (including Swift 3), but which one is most likely to achieve widespread adoption and field testing in the long run?
No, I'm not confusing them. I'm responding directly to the comment that Chris Lattner represents a more measured approach to software development than is traditional in the tech industry.
I don't think Swift stands to gain widespread traction outside of Apple-oriented app development. Aside from a lack of stability, Apple is too well known for boxing out its competitors. I've used and loved their products my entire life and I know how annoying it is to go against Apple's grain.
>I don't think Swift stands to gain widespread traction outside of Apple-oriented app development.
It already is, though; there are several Linux web frameworks, etc. It's open source and community-run, so I'm not sure how they're planning to box competitors out of it.
There are some web frameworks that are in development. That does not mean Swift has gained any traction. Also, having toyed around with one, the experience was not great.
When writing a server, I would take Go over Swift any day. It outperforms it, uses less memory, it's simpler, oh, and it uses a "traditional" GC.
That is very much _not_ the case according to the testing I have done recently.
Swift uses a lot less memory than Go unless the program uses only trivial amounts of memory in the first place. Using interfaces in Go data structures makes the difference even more pronounced.
On top of that, all runtimes that use a tracing GC require tons of spare memory at all times unless programs are very carefully written to avoid memory allocation.
That said, Swift has a few very weak spots when it comes to memory. Most notably the String type, which is terrible on all counts, but that is a whole different story.
> On top of that, all runtimes that use a tracing GC require tons of spare memory at all times unless programs are very carefully written to avoid memory allocation.
Only if said language doesn't allow for stack or static-global allocation.
> Doesn't that effectively amount to manual memory management?
Not really, example in Active Oberon:
TYPE
    point = RECORD x, y : INTEGER; END;

VAR
    staticPoint : point;                      (* On the stack or global *)
    gcPoint : POINTER TO point;               (* GC pointer *)
    noGCPoint : POINTER(UNTRACED) TO point;   (* pointer not traced by the GC *)
In Active Oberon's case, those pointers are still safe: they can only point to valid memory regions. Think of them as weak pointers that can also point to data on the stack or in global memory.
This is in safe code.
If the package imports SYSTEM, it becomes an unsafe package, and then, just like with e.g. Rust's unsafe, the rules are bent a bit and usage of SYSTEM.NEW() / SYSTEM.DISPOSE() is allowed.
Just like any safe systems programming language, it is up to the programmer to ensure this pointer doesn't escape the unsafe package.
I still don't get how you can say that this memory is not under the GC's control but it's "not really" manual memory management either. Is it reference counting then? How does that memory get released?
It doesn't get released, unless you are doing manual memory management inside an unsafe package.
In a safe package it can only point to existing data, there isn't anything to release.
If the pointee is something that lives on the heap, it is similar to a weak reference: it points to GC data but doesn't count as yet another GC root.
If the pointee is on the stack or in global memory (the data segment in C), then there is also nothing to release. Global memory only goes away when the program dies, and the stack gets released on return. Memory allocated by the compiler for VAR declarations is static.
Usually the idea is that you use untraced pointers to navigate statically allocated data structures; they are not to be exposed across modules.
ARC is a tradeoff between manual and automatic memory management. Requiring a little more care from the programmer is intentional, not a disadvantage as you picture it; it is the price for not having, you know, a GC. GC is less error-prone not for free but at the price of eating CPU and memory, which in the world of mobile devices equals less battery life, so it is quite desirable for iPhone and MacBook software not to have one.
Different approaches to memory management differ in extent of how much of programmer's job they automate.
Garbage collectors are fully automatic and rarely if ever require you to mind anything; automatic RC does almost everything but requires the programmer to analyze and annotate some references as 'weak'; manual RC requires a lot more programmer effort while still technically being "automatic"; and manual memory management means the programmer does everything.
Automatic/manual is a scale, not a boolean yes/no, and the point is that ARC lies on it a bit closer to manual than garbage collectors.
ARC is not an algorithm; it's a language-level feature that generates retain/release calls automatically so that the programmer does not have to. Unlike GC systems, where the resulting program actually runs a collection algorithm (and spends CPU on it), with ARC the generated program is no different than if the retain/release calls were written manually, and it runs no extra code.
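As an illustration, this is roughly the bookkeeping ARC emits, written out by hand (a toy sketch in Go for neutrality; `RefCounted`, `retain`, and `release` are made-up names, not any real API):

```go
package main

import "fmt"

// RefCounted mimics an object under manual reference counting; ARC's
// contribution is inserting these retain/release calls at compile time
// so the programmer never writes them.
type RefCounted struct {
	count int
	name  string
}

func (r *RefCounted) retain() { r.count++ }

func (r *RefCounted) release() {
	r.count--
	if r.count == 0 {
		// Deterministic: the object dies at exactly this point,
		// with no collector thread or pause involved.
		fmt.Println("freed", r.name)
	}
}

func main() {
	obj := &RefCounted{count: 1, name: "buffer"} // creation implies one owner
	obj.retain()  // a second owner appears
	obj.release() // first owner done; count drops to 1, nothing happens
	obj.release() // last owner done; the object is freed immediately
}
```

The point being: the compiled program only ever runs these increments and decrements, exactly as if a careful programmer had written them by hand.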
Depends on what your point is. Both ARC and GC are approaches to limit the complexity and difficulty of memory management. As such, I think it's very reasonable to compare them, because they're different approaches to the same underlying problem.
FWIW, as someone who was a Java programmer for over a decade before learning Objective-C right after ARC came on the scene, I greatly prefer ARC over garbage collection. I find the things you have to remember to think about with both ARC and GC (e.g. circular references and unintentionally strongly reachable references) to be about the same cognitive load, but the deterministic, predictable behavior of ARC means you won't have to debug random GC hangs that only happen in prod under heavy load, and then fiddle with a million GC options to get performance to be acceptable.
Not really. They are both valid comparisons, since ARC is much easier to work with than manual management and can offer more predictable performance than GC. That said, it's also slower than manual management and can be trickier to work with than GC.
Just to point out that the Swift LLVM compiler is written in C++. So it being unreliable doesn't necessarily say much about the reliability or goals of Swift as a language.
It would be apparent if you were a regular user of the compiler. Random crashes while compiling well-formed code and performance problems with larger files/projects are quite common.
Still, I would say it's mostly usable now. It used to be a lot worse.
This is downvoted presumably for lack of information, but it's pretty much true.
The Swift compiler segfaults very frequently. I do find this amusing in that it's the compiler for a theoretically largely-memory-safe language (yes the compiler is written in C++, it's still funny). The syntax highlighter in Xcode, which is driven by the same stuff, also crashes, which breaks autocompletion and even indentation. Using Xcode, you just have to get used to it. It frequently reports the wrong error message - just something that isn't even close to related. Sometimes object files just become 0 bytes and you either need to clean (and experience the Swift compiler's blazing performance again) or go and modify that file so that the incremental compiler will pick it up.
I've found most of these to be triggered by using a lot of closures and possibly type inference. Shaking out the incorrect errors or segfaults is... not fun.
Oh? I meant it as a serious question. I wasn't sure what parts I should include, but thanks for essentially saying most of it.
I should mention I also find the community to be sorta toxic. They're so focused on Swift being the one language to rule them all, and they use terms like "Swifty".
I get your point; programming language communities all have a certain level of fan-boyism. However, I find Swift's community to be particularly abhorrent. I stopped participating when people started to ask if certain code is "Swifty" and would judge the merit of something on whether it's "Swifty". It also got tiring how militant they were toward other programming languages, especially Java.
Here's a great interview with the MS researcher who invented the technique: http://www.sciencefriday.com/segments/crowdsourcing-data-whi...
One of the quotes I always liked from it is "any overly accurate estimates of too many statistics is blatantly non-private"