Hacker News | fio_ini's comments

I am truly sorry. I can't understand the physical networking from the pics or the description... I'm probably just missing something. Is there one blue plug going from the laptop to the Cisco switch, or to the PCI Wi-Fi module? I see a blue plug going to each device. So I'm guessing everything is plugged into the Cisco switch?

If you could show all the wiring and label it (according to the table below), I think it would add a lot of value for someone less familiar with these kinds of setups (like me).


Hey, OP here. This was almost a decade ago, but I'll try to describe what's going on. It's kind of a crappy picture.

* WAN connection comes in by coax, into my cheapo cable modem (off screen), and then by Ethernet into the franken-NIC sitting on top of the laptop.

* The NIC on top is a normal PCIe card, but with the bracket missing. The ExpressCard riser [1] is connected by a mini-HDMI cable (the flat black one), which curves up, around, and back in from the left side into the laptop.

* Then, the blue cable on the side of the laptop is a VLAN trunk going into the Cisco switch on port 23/24, outside the picture.

* From there, another port on the switch is set up as an access/untagged port going into one of the LAN ports on the D-Link, which acts as the access switch.
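A hypothetical sketch of what the switch side might have looked like (the VLAN number and interface names are made up; older IOS may also need `switchport trunk encapsulation dot1q` before the trunk mode line):

```
! trunk port carrying all VLANs to the laptop router (port 23 in this setup)
interface FastEthernet0/23
 switchport mode trunk
!
! access/untagged port toward the D-Link LAN switch
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
```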

I don't think it was set up here, but at one point I also had a dock under the ThinkPad, with the serial adapter wired up to the switch's console port so I could manage everything by ssh'ing into the router.

[1] https://www.ebay.com/itm/115721630079

Also note that all the cables were hand-crimped because I was too cheap to buy new patch cables at the time.

I was in college, and truly had more time than money back then. It's the kind of doohickey only somebody very young, very crazy, or a bit of both would make. ;)


Well, I remember seeing you can bring your MacBook or other devices into the space as a virtual window/screen/monitor (whatever they're calling it), so I figured I could already run VS Code and any other apps at the version 1 release. But I know what you mean.


I can't stop thinking about how cool it will be to finally remove all these monitors and wires from my desk. Operating inside a virtual space has an amazing appeal to me: declutter the desk and office area, scale screens bigger and smaller, and not be confined to tight windows, bringing in more physical screens, setting them all up, or switching between workspaces with hotkeys. Even the virtual workspaces on a single physical screen use extra mental RAM to always remember which space the tabs are in and where. This evolution frees up my neck and my posture; minus the bit of weight on my head, I can be sitting or standing.

This next development from this company I'm connected to seems to move toward a much more natural way of thinking about and interacting with our computing space. Besides all the great human side of its added value, this headset is tangible for me because it's connected to an ecosystem that I'm part of. If I were to get a Quest, for example, now what? I'm connected to Facebook? Great...

The price for Apple Vision is pretty high, but damn, it's a pretty amazing piece of technology, and it blows most other major headsets out of the water; there are over 20 million pixels across the two displays, for God's sake. The amount of virtual real estate we're talking about is like walls of 4K screens all wired together throughout your house, with a hundred different HDMI/DVI/DisplayPort adapters to GPUs. What a freakin' nightmare that would be, easily all costing well over $3.5k.


You can (oversimplified, tech people yell at me or whatever) display your MacBook screen inside Apple Vision as a screen/monitor/window, whatever it's called, the same way you would an app.


Gamers in space with 300fps and no lag. In the future when I get sniped in matchmaking it’ll be by an astronaut.


Imagine the cooling system!

A gaming computer can use 1000W. Google says spacecraft radiators reject 350W/m^2.

If my math is correct (no guarantees) you'd need a king-sized-bed-sized panel just to cool your rig.
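Sanity-checking that back-of-the-envelope estimate, under the same assumptions (1000 W of heat, 350 W/m² rejected):

```shell
# required radiator area in m^2, assuming ~1000 W dissipated and ~350 W/m^2 rejected
awk 'BEGIN { printf "%.1f\n", 1000 / 350 }'
```

That comes out to roughly 2.9 m², which is indeed king-bed territory (a king is about 1.9 m × 2 m ≈ 3.9 m²), give or take the fact that real radiators can reject from both faces.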


I don't think that because a computer uses 1000W of power it would produce 1000W of heat, though. I think if that were true then there could be no computation done, since all the energy of the electrons would be converted to heat. I'm not sure what the efficiency rating of computers is, or how much is turned into heat.

Not to rain on your back-of-the-napkin math, though. I think your sentiment is right: the cooling systems for computers in space would need to be bigger, since you can't use convection to move the heat away.


It is all heat, except for the light from your monitor which then turns into heat when it's absorbed.

Computing isn't doing work in the physical sense. An HDD of ordered data doesn't have more potential energy than an HDD of data in a different order.
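To put a number on "it is all heat": even the thermodynamic floor for computation (the Landauer limit, k_B·T·ln 2 joules per erased bit) is around twenty orders of magnitude below what a gaming PC draws. A quick check at room temperature (~300 K):

```shell
# Landauer limit: minimum joules dissipated per erased bit at ~300 K
awk 'BEGIN { printf "%.2e\n", 1.380649e-23 * 300 * log(2) }'
```

That's ~2.9e-21 J per bit; even erasing 10^15 bits every second would cost only microwatts, so essentially all of the 1000 W input ends up as heat.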



Is there still any way you'd want to get in this? There's nothing manual, and it's honestly a huge statistical gamble riding around in this thing. Even with airbags, why would you risk your life? At least with a steering wheel and pedals, for example, you could take over if you needed to. There's no way you'd be safe in the middle of an intersection if someone came flying through a red light, which as a conscious being you could see from a distance when you look both ways as far as you can see. You'd be like, oh snap, someone is flying, the car's not gonna stop!

Current state of autonomous driving is not sophisticated enough to predict erratic human behavior. An example is a car turning left from oncoming traffic and crossing over your lane, without any stop signs/lights. Human drivers will sometimes cross at the very last second, hit the brakes at the last second, cut you off at the last second, and the car will probably slam on the brakes so hard it will send you from the second row to the first row or break your neck from whiplash, even if you're only going 20-30 mph; it's a hard stop.


> Current state of autonomous driving is not sophisticated enough to predict cars crossing lanes that's turning and crossing your lane from oncoming traffic.

This is just completely false. AV software stacks absolutely can and do predict off-nominal behavior like you describe. In general they predict better than humans, too.


You're literally talking about using a lidar and some object detection (for cars and other large objects) to predict human behavior. You obviously don't realize that the human behavior is in the other car: eyeballs, mouth, head movement, body movement. That part is not being tracked. That part is human behavior, and that's where the car movements start. So everything the "AV" is predicting is super old information, because the human is full seconds ahead of whatever the computer is reacting to later; it sees a large object move seconds after the human brain has calculated its behavior. The computer is not solving the human part, which you claim it does.


This is something that humans do seamlessly, and you don't realize it because you're probably a robot. But we look each other in the eyes, hand-wave, point, etc. Not always (these are the asshole drivers), but you get the point. Humans are superior and always will be until it's completely computer cars on the road, networked to each other the same way humans are networked together.


The computer sees cars that are going to run red lights, at great distances, with complete certainty, and without overlooking any of them. The example you have chosen is one in which the self-driving system is indisputably superior.


Are you a robot or a real person? I'm jc. When you're merging into a right lane from a left lane, for example on a two-lane road, the AV cannot tell if the person is gonna let you in. Humans are superior because we can communicate with each other, and you can predict what the other car will do because you can see a person's eyeballs, body language, etc. and communicate with them without words, which we all do as we drive: hand waves, pointing, head nods, etc. A computer has no fucking idea, and until they're all networked or something, it's gonna be hard.


The transitions are so off and irregular that you're gonna get hit or rear-ended.


Alas, essentially 100% of accidents happen with a human behind the wheel. Humans vastly overestimate their ability to avoid accidents when in control.


Dude, literally watch this video https://www.youtube.com/watch?v=kJD5R_yQ9aw where a car is stopping in a busy road trying to merge into the next lane. Are you serious?


The tech is still so bad.


I sit down at the coffee shop. It's so magical... the way it wisps across the room as I pull it out of my pocket. The wind blows and brushes a beautiful brunette woman's hair across her face. She smiles at me. Is that a GPD Micro PC? Yes I say smoothly as she touches my thigh and asks if she can borrow it. I want to email my dad and tell him that I've found the man that shall hold my hand in marriage.

A man walks out of the bathroom and over to the brunette woman. Honey are you ready to go?

A devil appears. Only in your dreams will this laptop do anything worth writing home to mom about; plus, she's with me now. The keyboard is cramped and the screen is so tiny. It's cool for about a day, but welcome to hell. If you want to feel trapped, then stay here forever. Your fingers can try to shift windows around and navigate your home space.

As the dream turns into a nightmare, I wake up from my deep slumber. How did I get here?


Jeff Bezos whispered in my ear: 256GB model is $700+ CAD


So the eBay ThinkPad still reigns king. Hah, "cheap."


You can accomplish the same thing using environment variables. You can pass them in at build time or deploy time: `ENVIRONMENT=dev docker-compose up` or `docker-compose build --build-arg ENVIRONMENT=dev`, and in your case maybe you write some function `find_answer() { something-async ... }; ENVIRONMENT=$(find_answer) docker-compose ...` You can also read in .env files, which in my opinion is cleaner, since you can check the configurations into source control, and YAML or .env files are easier to read than hundreds of lines of bash and its esoteric syntax. I love bash, but I'm just sayin'.
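For what it's worth, Compose also does `${VAR:-default}` substitution right in the YAML. A minimal sketch (the service name, image, and paths are made up):

```yaml
# docker-compose.yml (hypothetical): ENVIRONMENT comes from the shell or a .env file
services:
  app:
    image: "myapp:${ENVIRONMENT:-dev}"            # falls back to "dev" when unset
    volumes:
      - "./config/${ENVIRONMENT:-dev}:/etc/myapp"
```

Running `ENVIRONMENT=prod docker-compose config` shows the fully resolved file before anything is brought up.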


It's a simple example, maybe too simple to get the point across. What I want is to conditionally set things in a YAML file. If you have ever used a templating language, you know this problem:

    services:
        redis:
            ...
    
        {{ if some_condition }}
            volumes:
                bla bla
        {{ end if }}
    
        {{ if some_other_condition }}
        another_service:
            ...
        {{ end if }}
    
    volumes:
        {{ for volume in volumes }}
            - ...
        {{ end for }}


The compose file is declarative for clarity, so instead of fighting it, do the things you want to do at other levels. Variables? Pass in some env vars. Conditionals? You can specify multiple configuration files so that whatever calls `docker compose up` can deal with the conditionals of deciding which config files to include.
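A sketch of the multiple-files idea (the file names and the dev-only mount are hypothetical): the base file stays declarative, and the caller decides whether to layer the override on top with `docker compose -f docker-compose.yml -f docker-compose.dev.yml up`.

```yaml
# docker-compose.yml -- the always-on base
services:
  app:
    image: myapp

# docker-compose.dev.yml -- merged over the base only when passed with a second -f
services:
  app:
    volumes:
      - ./src:/app/src
```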


No, haha, I get the point, but it's just another example of someone over-engineering a problem where it should be declarative. In what use case would you dynamically provision some arbitrary n number of volumes without knowing the path names to them? For volumes, why wouldn't you just use "volume_name:/some/path" ... "another_volume:/another/path" and let Docker manage the volume for you? You're going to end up writing "/some/path,/another/path" in a different file anyway, so why not just put it in the compose file? Also, why wouldn't you just mount the root path and put n paths inside it instead of making them all volumes?

I'm struggling to see that your use case actually needs such a feature; you're solving a problem for something that's actually broken somewhere else. You're adding some virtual layer that makes things harder to manage or understand because you don't understand all the features of Docker Compose, instead of fixing the root of the issue. I'm not convinced that your "feature" is necessary.


Everything is simple until someone asks you to try and diagnose (and hopefully fix) a bug on an upgrade between two different versions of a piece of software that can be deployed using 5 different strategies or that has 20 moving services...


Your volume example is an anti-pattern with regard to something like the 12-factor app. You're conditionally changing and setting the volumes based on the environment you're in? Your code and service are gonna be busted. How would you even know if anything worked in prod if it's not reproducible in dev, since # if dev then volume xyz else volume abc #?


So it's an anti-pattern just because you say so? This is a simplified example of a complex problem that needs arbitrary volume mounts. Using volume mounts to run software inside containers is a common dev pattern in scenarios where rebuilding takes too much time to iterate quickly. I am sorry, but unfortunately I cannot expand on the issue further here to help you understand it, and that's fine if you think I am over-engineering Compose.


I honestly don't care whether it's you misunderstanding or your teammates building a bad framework/foundation to iterate upon. Doesn't matter to me either way, bud. But you're going against the premise of Docker and docker-compose, which is write once and run anywhere, because the services will be different every time you deploy, since they'll have variable path and file locations, etc. That's why it's an anti-pattern, not because "I say so."


The point of docker-compose is partly so that your stack is ephemeral, right? You stand things up and shut them down with one simple switch, and it can move between different environments using the same declaration as the last environment. So now I gotta drag your bash script around and embed my yaml file in it? lol


The point of docker compose is to orchestrate containers. It's great for you to have lines that you always color inside, but other people choose different lines.


You're coloring outside the lines and making a mess. You should grab a different coloring book; this one's not for you.


Podman allows for Kubernetes templates, so theoretically you could use Helm for this... otherwise, Minikube on Podman. I wish they had Go templating in the compose files, though; that would be rad.


Why do you want to set things conditionally like that?

This feels like it's a very incorrect solution to a problem that you're making far more complicated than it needs to be.


I apply this solution exclusively on the development environment. I have found that having the power to bail out to a shell whenever docker compose does not support something helps me iterate faster on my local dev (without affecting others).

The following is just a contrived example. Why would I do that? Because I can:

    # Bring up services without a command
    #
    # usage:
    # nocmd.sh up -d service_name
    
    function _compose {
      docker-compose -f docker-compose.yml "$@"
    }
    
    function nocmd {
    cat << EOF
      services:
        $1:
          command: tail -f /dev/null
    EOF
    }
    
    function nocmd_compose {
      # last argument is the service name; "$@" is quoted so args survive word splitting
      local service="${@: -1}"
      _compose -f <(nocmd "$service") "$@"
    }
    
    nocmd_compose "$@"


I can punch myself in the dick but I don't


I got locked out of various MFA sites when I got a new device and had to redownload my authenticator app; it wasn't linking to the accounts anymore and I could not get in. It became a chicken-and-egg problem. Eventually, going through support channels for each site I was locked out of, I was able to get back in, but it was a giant headache.

I vaguely remember someone posting about a hypothetical scenario where their house burnt down and they lost all their physical devices and couldn’t get into anything etc.


>I vaguely remember someone posting about a hypothetical scenario

This one? "I've locked myself out of my digital life": https://news.ycombinator.com/item?id=31652650

