Meaning that they didn't launch at the right time to arrive at Mars, given where Mars currently is in its orbit. The spacecraft will pass through the imaginary circle around the Sun that is Mars' orbit; it's just that Mars will be at a different part of that orbit at the time.
But actually, it seems they decided to just empty the tanks and get as much delta-v as possible, and it'll go all the way into the asteroid belt as a result.
Yes, depending on time of year. They could probably have aimed to hit Mars, but stopping in Mars orbit would (probably) require another burn on the Mars end.
Regardless, like other posts have said, they haven't actually left Earth orbit yet. There's one more burn in about 5 hours.
Actually, no: once the 2nd stage cuts off, the trajectory is mostly fixed and we know what orbit it is in. There are, I believe, two more small burns that will be done to adjust the trajectory, but these are more of an adjustment to what kind of Martian transfer orbit it is in. It already has the hyperbolic velocity to leave Earth orbit and enter a solar orbit with an aphelion at the same distance as Mars.
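The aphelion claim is easy to sanity-check with the vis-viva equation. A rough back-of-envelope sketch (the ~3.5 km/s hyperbolic excess speed is an assumed ballpark figure for illustration, not a published number for this flight):

```python
# Given a hyperbolic excess speed at Earth, what aphelion does the
# resulting heliocentric orbit reach? (Assumes the excess velocity is
# added prograde to Earth's orbital velocity, so perihelion ~= 1 AU.)
MU_SUN = 1.327e11   # Sun's gravitational parameter, km^3/s^2
R_EARTH = 1.496e8   # Earth's orbital radius, km (~1 AU)
R_MARS = 2.279e8    # Mars' orbital radius, km (~1.52 AU)
V_EARTH = (MU_SUN / R_EARTH) ** 0.5  # Earth's circular speed, ~29.8 km/s

def aphelion_for_v_inf(v_inf_kms):
    """Aphelion (km) of the heliocentric orbit after escaping Earth
    with hyperbolic excess speed v_inf_kms, applied prograde."""
    v = V_EARTH + v_inf_kms               # heliocentric departure speed
    energy = v**2 / 2 - MU_SUN / R_EARTH  # specific orbital energy (vis-viva)
    a = -MU_SUN / (2 * energy)            # semi-major axis
    return 2 * a - R_EARTH                # aphelion = 2a - perihelion

# With an assumed ~3.5 km/s of excess speed, aphelion lands well
# beyond Mars' distance (roughly 1.7 AU vs. Mars at ~1.52 AU).
print(aphelion_for_v_inf(3.5) / 1.496e8, "AU")
```

Push the excess speed higher (i.e. "empty the tanks") and the aphelion climbs out into the asteroid belt, which matches the comment above.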
In 6.5 hours SpaceX will have finished everything they wanted to test with this flight I believe, including a number of post-launch checks of various systems and sensors on the payload, and those re-ignition tests of the 2nd stage.
Unless I'm mistaken, it's not on a trajectory out of Earth orbit currently.
It is in a parking orbit, where it will sit for a few hours and then will reignite and will be set on a trajectory toward "martian orbit".
IIRC they are currently testing (or proving, depending on how confident they are) that the second stage can coast for several hours in space before reigniting.
The landing barge is towed into position by tugs, and once the boosters are landed I presume it is towed back out, so there will be some tug boats nearby.
It has station-keeping thrusters to hold precise GPS coordinates, but it can't move far under its own power and is towed. The name ASDS is not really to be taken literally here.
It will likely be a proper dynamic positioning system, so it will more than likely use two independent position reference systems: one could be DGPS, the other could be as basic as a taut wire.
I don't find that surprising. The communication link back is presumably by satellite, since it is in the middle of the ocean, and probably uses a directional antenna because of the high bandwidth. Do you have any good suggestions for a reliable connection via directional antenna on a flimsy barge that a rocket is landing on, in the middle of the choppy North Atlantic?
Tow a cable from the barge to a nearby ship, buoy, or platform on which the antenna sits, that's out of range of the vibrations and faces a different direction?
I believe they are required as part of FAA regulations to ensure that no manned craft are within 15 miles of the landing zone? Something like that, at least, which would complicate a tether-based approach.
I think the suggestion was a separate but tethered unmanned platform, which would presumably have less vibrations. I would imagine it's just not cost-efficient for SpaceX, as they'll be able to recover the footage later regardless.
Who needs a tether? Ubiquiti makes gear capable of slinging 400+ Mbps over 25km in a straight line. For about $1500 they should be able to shove the "last mile" to a ship outside of the exclusion zone and put the sat uplink on the ship.
They have grid antennas with a slightly wider beam, or maybe a sector style with 20-30 degrees could be used. I just think it's easier to engineer a workable ship-to-ship stabilized radio solution rather than deal with the ship-to-sat dish which probably has much tighter tolerances.
In the pictures of the droneship, the quite distinctive Inmarsat BGAN antenna pod (in essence, 3G bounced off a geostationary satellite) is plainly visible.
Edit: inside the pod is a fairly high-gain directional antenna mounted on a motorized positioner, but it is designed to track the satellite from a slow (i.e. ship) or predictably moving (i.e. truck) platform, and certainly cannot reliably cope with vibrations from the landing. Man-portable BGAN terminals usually have a fixed antenna and officially require a quite lengthy positioning process. (On the other hand, it will work, for some value of "work", if you just throw it in a car trunk and park with the trunk vaguely pointing south, but you'll be lucky to get a reliable phone connection that way, never mind a video stream.)
How about a camera feed from the tow ship (with a view like the two other boosters landing)? If you can film a rocket 300 km in the air, you can surely spot a rocket landing 15 miles away. Unless of course you don't want the world to see it crashing into the ocean live.
Those are not at all the same. You can see different parts of the ground. You can even tell that the source of the right stream was really to the left of the other one.
You'd think it would be easy to tell. I mean, you see the other booster ignite in both views, which means they have to be looking toward each other - in opposite directions. I don't know about you, but when two people look in opposite directions they tend not to see the same things.
Posting here rather than the blog because I don't have a google account:
What about adding sshd to the minimal install? If the purpose of this is minimal installs of containers and cloud servers and such, that seems like quite an omission.
If you have sshd in the minimal image then you have to deal with host keys:
- on embedded systems and some other places where minimal images are used, generating the host key on first run can cause a very significant startup delay.
- in some container environments, instances are so identical that you might not have enough entropy to generate sufficiently unique keys.
- if somebody generates a host key and then creates an image from a running container, then you might end up distributing a host key, making what should be private public.
I've probably got some of these details wrong and am going to be promptly corrected, but there are some very good reasons for excluding sshd.
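The entropy point can be illustrated with a deliberately simplified toy sketch (real sshd draws from the kernel CSPRNG, not a user-space PRNG; this just shows what happens when clones start from identical observable state):

```python
import random

# Toy illustration: two freshly cloned environments whose "entropy
# sources" are identical derive identical key material.
def toy_keygen(seed):
    rng = random.Random(seed)  # stand-in for a starved entropy pool
    return bytes(rng.getrandbits(8) for _ in range(32)).hex()

container_a = toy_keygen(seed=0xC0FFEE)  # both clones boot with the
container_b = toy_keygen(seed=0xC0FFEE)  # same state -> same seed
assert container_a == container_b        # identical host keys: bad news
```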
I use containers as lightweight VMs in many places. Generally I see this as a way to get a minimal install that other tools can then configure appropriately, with up to date packages fetched from upstream mirrors directly, instead of installed from CD and then upgraded.
I currently use packer.io to script the creation of a bunch of server images, and for Ubuntu I've missed the "minimal install CD" that other distros have. Instead packer has to download an 800 MB CD image in order to install only a few hundred megabytes of uncompressed packages in a bare-bones install, which is then provisioned by some orchestration tool that at its heart uses ssh to log into the virtual machine.
Not having SSH means you need to add in some sort of serial-attach step to manually install sshd, or hook into the install scripts to download sshd as part of the install or whatever. Either way that's additional custom work that is probably common to a great many use cases.
And a couple of internal packages with their own dependencies (including lldp, snmpd, etc.) that do a variety of things, including user management (Active Directory based) and automatic registration into our asset database and monitoring systems.
You're running these containers in an orchestrator, right? That should give you API access to the running container, allowing you to get shell. E.g. with kubernetes, `kubectl exec` will get you into the container.
But the sibling comment about using a Dockerfile to install/start sshd works if you're running these containers on a remote host without any kind of access to the running container.
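For reference, a minimal sketch of that Dockerfile approach (the base image tag and the `authorized_keys` path are placeholders, not anything from the post):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends openssh-server \
 && mkdir -p /run/sshd
# Bake in your own public key; never ship a default password.
COPY authorized_keys /root/.ssh/authorized_keys
# Note: apt's postinst generates host keys at *build* time, so every
# container from this image shares them -- regenerate them at first
# boot if that matters for your threat model (see the host-key
# discussion above).
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```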
LXD containers make fantastic replacements for VMs! Just try 'lxc launch ubuntu:'. Then 'lxc list'. And then you can either exec into, or ssh into your machine container!
Not arguing that point for this image, but I use containers as lightweight VMs, not only as stateless-horizontally-scalable-try-to-be-google app servers.
Docker is awesome for making virtual desktop systems. My dev environment + all required IDEs and apps is a docker image running xrdp, x2go, and google remote desktop, and my home directory is mounted as a volume. Works great!
Now when I need to move my dev environment to another computer (travel), I just copy the home dir, docker build, and I have the exact same environment ready to go.
My dev environment is 100% deterministic, backed up to git, and serves as documentation for what I need installed to do my work. If I find I need something else added, I modify the Dockerfile, commit, rebuild and redeploy. If something messes up my environment, destroy and rebuild is only a couple of commands away.
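A minimal sketch of what such a Dockerfile might look like (the package list, user name, and mount path are illustrative assumptions, not the poster's actual setup):

```dockerfile
FROM ubuntu:18.04
# Everything the environment needs, declared in one place -- this file
# doubles as documentation of the dev setup.
RUN apt-get update && apt-get install -y --no-install-recommends \
    xrdp x2goserver build-essential git vim
RUN useradd -m dev
# The home directory is NOT baked in; mount it at run time, e.g.:
#   docker run -d -p 3389:3389 -v "$HOME/devhome:/home/dev" devenv
EXPOSE 3389
CMD ["/usr/sbin/xrdp", "--nodaemon"]
```

Because state lives in the mounted volume, "destroy and rebuild" really is just `docker rm -f` plus `docker build` and `docker run`.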
For the record, LXC containers are much more cooperative as non-emulated VMs than Docker containers. I'm sure this is also the case with Virtuozzo, etc. (though I haven't tried them). These other container systems function nearly identically to an emulated VM, which is what most people actually want out of containers -- thin VMs.
It's stupid that you've been downvoted. It is so tiresome to see people ask such useless questions like "what? you want to use this thing in some way that my tiny brain cannot envision? what is this madness?"
His reply was snarky and created a false dichotomy. The 90% use case sits between "lightweight VMs" and Google-scale horizontal app deployment. Consider that the former is just as much an outlier as the latter. This post is about a pragmatic minimal Ubuntu base image, which would meet neither case well.
Again, consider that you are creating this dichotomy, not the tools or the vast majority of practitioners in this space. Docker comes with (and encourages) lots of ways to persist state, as do orchestration environments like Kubernetes and ECS, and you should choose the approach that suits the problem you are solving. If you want containers as lightweight VMs, there are a ton of ways to do that, and they are actively supported.
I'm reluctant to get off-topic here, because the question relating to the actual post should be "Does it make sense to have openssh in a minimal Ubuntu base image", to which the answer is "No, obviously".
We've moved at work to mainly Alpine-based k8s containers, which is awesome, but you lose a lot of debuggability.
Thinking about it - with Linux KSM the overhead of full-fat containers based on Ubuntu (for example) isn't massive. We may have to look at the metrics I reckon.
I often boot machines connected to the public internet or to a coffeeshop / other public wifi with the Ubuntu live CD, so I wouldn't like the live CD to have an sshd with a well-known password out-of-the-box. So if you're going to have to log into the machine to customize authentication anyway, you already have enough access to `apt-get install openssh-server`.
(It would be nice to have a one-click tool that builds you a customized image with your own SSH public keys baked in. Ubuntu doesn't have to run this tool - actually there's probably a cool project idea for someone in standing up a little website to do this, either by letting you paste in public keys or grabbing them from the GitHub public API.)
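A sketch of the kernel of such a tool (the GitHub endpoint is real; the function names and the cloud-init-style output shape are just one illustrative way to do it):

```python
import json
import urllib.request

def github_keys(user):
    """Fetch a user's public SSH keys from the GitHub public API."""
    url = f"https://api.github.com/users/{user}/keys"
    with urllib.request.urlopen(url) as resp:
        return [entry["key"] for entry in json.load(resp)]

def cloud_init_user_data(keys):
    """Render minimal cloud-init user-data baking the keys into an image."""
    lines = ["#cloud-config", "ssh_authorized_keys:"]
    lines += [f"  - {key}" for key in keys]
    return "\n".join(lines) + "\n"

# Pasted-in key instead of a fetched one (placeholder key material):
print(cloud_init_user_data(["ssh-ed25519 AAAA... alice@laptop"]))
```

The website version would just wrap this behind a form and hand back the rendered file or a rebuilt image.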