Hacker News | panduwana's comments

So it needs taxpayer subsidy just to be on par with fully-private Falcon 9? Seems like it loses before the match even begins.


What a narrow world view.

A fully reusable single stage to orbit "plane" completely changes the dynamics of putting stuff in orbit both on cost and turn around time.

The Falcon 9 is privately funded, as is the Merlin engine it uses. However, pretty much all the materials science, initial engineering, etc. required to design and build a Merlin was publicly funded.

There are times when a public/private partnership makes sense and this is one of them.


SpaceX received a subsidy from Elon Musk. The financial case for the company wasn't clear at the start - it was only because they had significant financial backing for the initial development period that they could get to where they are now. They may now be profitable, but if you'd tried to make a case a decade ago for funding you'd get nowhere. There are all sorts of high-capital technology developments that simply aren't immediately profitable or low risk enough for VC funding. Why should we as a society rely on the whims of a few wealthy people? This kind of technology, should it be realised, has a value to everyone.


Why not call it a risky investment? He ended up with a big ownership stake, he wasn't just supporting them for the hell of it.


The UK isn't supporting Skylon "for the hell of it", either. Their taxpayers would likely be pretty happy to have the next Boeing/Airbus live on their soil.


I guess it's just a pet peeve of mine when people call a private, profitable financial arrangement a subsidy.

I complain about the usage for cell phones too.


Both situations involve someone with money taking a bet on a currently non-profitable venture hoping it'll pay off in the future. SpaceX wasn't profitable when Musk put money in.


Note that one of the partners is QinetiQ, the privatized wing of the UK defence research establishment. There's a strategic interest here too.


The 'subsidy' is less than a non-SpaceX launch would cost, and peanuts for an R&D project, especially one with this potential. It's also about the same amount the US military is currently spending to certify SpaceX for military launches. And what about the NASA contracts? Those are paid with taxpayer money too.


I find it fascinating that Elon Musk hasn't chipped in, since even though it's a competitor to SpaceX, it is ultimately a step in the right direction for his goal of putting people on Mars and into space.

Hell he could take the engine and build a new spacecraft based on it.


I think he's happy with the engines he has, as well as the ability to say "here's what we're doing" without someone else having the final say.


Also, as rich as he is, he is not made of money. His wealth is tied up in SpaceX and Tesla.


I'm baffled. How does the need for public financing make a project "lose before the match even begins"? The project needs public financing because private investors wouldn't shoulder the risk.

Most significant leaps of technological innovation in the history of civilization required public support. And as we move on, the leaps will become harder and riskier still, which means even less willingness from private investors to put up the money. So brace yourself :P


They are completely different designs, with different technology and applications. The Falcon is 1940s technology updated; the Skylon is new: nothing using the same engine concept has ever flown before.


If the belt operation can be changed from the current "take any two items on the belt, process them, put the result at the front of the belt" to "take the two front-most items on the belt, process them, put the result anywhere on the belt", we can save some bits and make shorter instructions (good for mobile):

    currently: OP load-address-1 load-address-2   // output is always put at the belt's front
    proposed:  OP store-address                   // inputs are always the 2 front-most items on the belt
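A toy Python sketch of the two disciplines (this is my own simplified model; the function names and the deque representation are made up, not Mill terminology). It shows the encoding trade-off: the first style needs two source addresses per op, the second needs only one destination address.

```python
from collections import deque

def mill_style_op(belt, fn, a, b):
    """Read any two belt positions a and b; push the result to the front."""
    belt.appendleft(fn(belt[a], belt[b]))
    return belt

def proposed_op(belt, fn, store):
    """Read the two front-most items; write the result at position `store`."""
    belt[store] = fn(belt[0], belt[1])
    return belt

belt = deque([3, 4, 5, 6])                      # index 0 is the front
mill_style_op(belt, lambda x, y: x + y, 0, 1)   # encodes TWO source addresses
# belt is now deque([7, 3, 4, 5, 6])

belt2 = deque([3, 4, 5, 6])
proposed_op(belt2, lambda x, y: x + y, 2)       # encodes ONE destination address
# belt2 is now deque([3, 4, 7, 6])
```

Note the other difference the sketch exposes: the first style grows the belt (old values slide back), while the second overwrites in place.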


How do you get ILP from this? It seems like it would make scheduling much more difficult, because you now need to make sure that the results you need for each instruction arrive in the right order on the belt, and you have to be able to execute any ordering of instruction types. The Mill can run fast partly because an instruction can use any result on the belt, and because similar instructions are grouped together, making decode simpler (from memory, they have two separate decoders: within each instruction, say, all the arithmetic operations are grouped together and decoded by one decoder, and all the others are decoded by the other).

Basically, I don't see how you can use what you suggest to do, in one instruction:

    [ add positions 7 and 5 | multiply positions 7 and 2 | call f on 4  and 5 | branch to foo if position 3 was LT else bar ]
ending up with the belt

    [ [7]+[5], [7]*[2], f([4],[5]) ... ]
or whatever you like. All you need to do to schedule the Mill is to perform as many operations in parallel as the hardware can do, and then find out where their results will be placed in order to create the next instruction.
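That scheduling step can be sketched in a few lines of Python (again a toy model of my own, not the real Mill encoding): a whole instruction group reads the OLD belt, then all results drop at the front together, so the compiler only has to track how far each old position slides back.

```python
def issue_group(belt, ops):
    """ops: list of (fn, src1, src2), all reading positions on the OLD belt.
    All results are pushed to the front together, in op order."""
    results = [fn(belt[a], belt[b]) for fn, a, b in ops]
    return results + belt   # every old position slides back by len(ops)

belt = [10, 20, 30, 40]
belt = issue_group(belt, [
    (lambda x, y: x + y, 0, 1),   # 10 + 20 -> 30
    (lambda x, y: x * y, 0, 2),   # 10 * 30 -> 300
])
# belt == [30, 300, 10, 20, 30, 40]; an old position p is now p + 2
```

Because every op in the group reads the old belt, the ops are independent of each other and can all be decoded and issued in parallel, which is the point made below about the Mill's decoders.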


The linked article says that "According to prior research, only some 13% of values are used more than once". So based on the very research the Mill builds on, your example where an "instruction can use any result on the belt" is actually the minority case.

As for scheduling my proposed store-addressed belt: you perform as many operations in parallel as you can, then for each operation you find the later operation that depends on its result, calculate the distance between the two, and assign that distance as the former's store address. The compiler has more work to do, yes, but it's not "much more difficult".
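A minimal sketch of that address-assignment step (my own toy representation; slot numbers and the dict encoding are made up for illustration): each value's store address is just the issue-order distance from producer to consumer.

```python
def store_addresses(deps):
    """deps maps consumer slot -> producer slot (slots in issue order).
    Returns producer slot -> store address (distance to its consumer)."""
    return {producer: consumer - producer for consumer, producer in deps.items()}

# op0 and op1 issue first; op2 consumes op0's result, op3 consumes op1's.
print(store_addresses({2: 0, 3: 1}))   # -> {0: 2, 1: 2}
```

This only works cleanly when each result has a single consumer, which is why the "13% of values are used more than once" figure matters to the argument.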


Offhand, I would say maybe one difference is that your model is trying to predict where the belt will be in the future while the Mill is looking backwards to find where the belt was in the past.

Another issue is that you would have to process the entire instruction in order to know where each operation gets its input. (How many operations in the instruction are taking things off the belt before I get my data?) In the Mill the operations are parsed in parallel, and they have all the information they need to start processing as soon as the instruction (block) is loaded into the buffer.

The size of the belt is a very finely tuned constraint (arrived at via simulation) that basically depends on how many cycles you have to save a value to the scratchpad memory (if needed) before it "drops off" the belt. There is a lecture that describes why it takes the number of cycles it does, and if you watch it you will probably understand better why the Mill is not about what is easy or hard for the compiler, but all about getting the silicon to jump through hoops fast and efficiently.
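The drop-off constraint can be illustrated with a toy model (entirely my own simplification, not the Mill's actual spill mechanism): a value still needed after it would fall off the end must be copied to scratchpad before the push that evicts it.

```python
BELT_LEN = 4   # illustrative; real belt lengths are tuned per Mill family

def push(belt, scratch, value, still_needed):
    """Push a new value at the front; spill the evicted value to
    scratchpad if it is still live, otherwise let it be lost."""
    belt.insert(0, value)
    if len(belt) > BELT_LEN:
        evicted = belt.pop()
        if evicted in still_needed:
            scratch.append(evicted)   # must happen before the value is gone
    return belt, scratch

belt, scratch = push([1, 2, 3, 4], [], 9, still_needed={4})
# belt == [9, 1, 2, 3], scratch == [4]
```

The scheduling question is then how many cycles separate a value's production from its eviction, since that window bounds when the spill can be issued.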


(See The Belt talk for an explanation of belt vs stack)


What country?


I am in Indonesia at present, but if nothing else I can help with ideas and experience.

The businesses I own are currently all on a co-op model. I think subject matter is likely to be a larger limitation than location, though.


Or virgins?


Sure. RBTree in C is that ugly. Take your time.


> Now why is this? It's because web apps offer a significantly poorer user experience across the board.

That is true today, but to say the issues can never be solved is a bit of a stretch:

- Speed issue: NaCl.

- Standard widgets: HTML5 already has input type=datetime, a color picker, address-book access, and many more.

- Animation: WebGL.

- GPS: Geolocation API is available today on most browsers.

- Sensors: Accelerometer API is available today on some browsers.

Now I'm not saying that web apps will certainly become the norm, but it's not impossible.


How did you learn about Clojure on GAE? What editor/IDE do you use (for Clojure)?


It started with wanting to use GAE and a functional language. I was learning Lisp, but I did not know about Clojure, so my first idea was to use Python; after some research and asking here on HN, I found that Python is not really suited for FP. That's when I discovered Clojure (someone here on HN suggested it); since it compiles to the JVM, it works on GAE too.

Then I did some research for a while. At first I found only some blog posts on how to interact directly with the GAE API. I then came across The Deadline (https://the-deadline.appspot.com) here on HN, which is written in Clojure on GAE and is a production app (I believe they use a custom solution rather than the library, but I really don't know). So I had validation that it was possible, and I started coding the other parts of the project (I began with the iPhone app to have a proof of concept).

In the meantime I kept searching on Google from time to time for info about the subject. One day I found that library, and also this article, which mentions it (again here on HN): http://www.glenstampoultzis.net/blog/clojure-web-infrastruct...

As you can see HN is a really good source for me. ;)

Regarding the editor, I use Vim with syntax coloring, autocompletion, and rainbow parentheses. At the time I did some research and in the end decided on Vim. To build the app I use Leiningen (appengine-magic has a Leiningen extension for building and deploying).

It works, but to tell the truth I really feel the need for a dedicated Clojure IDE (not a generic IDE like Eclipse or NetBeans adapted to Clojure with plugins). I even thought about writing one myself, but for now I don't have the time, and probably not the knowledge either.


It's no longer AMD's fab; they spun off their manufacturing arm as a new company named "GlobalFoundries".

