> What service does ham radio actually provide to society?

Ham radio allows the public to use slices of the radio spectrum.

If someone wanted to "hack" on RF projects, what would they do if the entire spectrum were handed to companies, governments, and militaries instead?

If you are arguing against ham radio you are basically arguing against hacking, tinkering and experimenting.


You're proposing a false dilemma, because there's a third option: make some of it ISM. Today, way more hacking, tinkering, and experimenting happens on ISM bands than on HAM bands. I've written about this before, so I won't spam the thread: https://news.ycombinator.com/item?id=36714225


The ISM bands are much, much more limited in other countries (e.g. duty-cycle restrictions). I agree that really interesting stuff is happening in ISM space; however, reallocating ham allocations doesn't seem like the right path forward.
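
For a sense of scale, here is a back-of-the-envelope sketch of what a 1% duty-cycle rule (as applies in parts of the EU 868 MHz band) means in practice. The frame airtime is an illustrative assumption, not a reading of any regulation:

  # Rough TX budget under a 1% duty-cycle rule (illustrative, not legal advice).
  DUTY_CYCLE = 0.01          # 1% of each hour may be transmit time
  SECONDS_PER_HOUR = 3600

  airtime_budget = DUTY_CYCLE * SECONDS_PER_HOUR    # 36 s of TX per hour

  # Assume a slow LoRa frame occupies ~1.5 s of airtime (assumption).
  frame_airtime = 1.5
  print(f"{airtime_budget:.0f} s/hour -> ~{airtime_budget / frame_airtime:.0f} frames/hour")
  # 36 s/hour -> ~24 frames/hour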

FWIW I think repealing the symbol rate restrictions and opening up the bands a bit more would go a long way towards recapturing the spirit that ham was started with. The hobby seems pretty ossified these days, and it would be great to see more advancement in digital modes and work closer to the state of the art.


Oh yeah, I'd be happy with saner HAM rules. The problem is that the community is sufficiently ossified that it actively resists such changes under the umbrella of "defending" HAM, so reallocation seems like a more practical method of change.


Or we could just leave it alone and not worry about the demands of the 0.000001% who want to accumulate yet more wealth at zero benefit to society, unlike ham operators, who can help during emergencies and natural disasters when all other means are down.


The point I'm trying to make is that hobbyists vs. 0.0001%ers is a false framing, because less restrictive rules would benefit a much wider set of users than just HFTs. If you compare the degree of open-source project activity between the ISM and HAM bands, the difference is night and day. Stuffing all that vibrancy into a few narrow bands while HAM activity above HF is practically dead is the opposite of a public good.

Re disaster support: complete and utter LARP, to an embarrassing degree. I've been familiar with a variety of clubs all my life through participating family members, and this simply doesn't happen in a way that has any practical benefit to anyone. Again, opening it up would improve disaster comms quality by incentivizing the development of better protocols and by getting access to those methods out of the hands of the unqualified gatekeepers of clubs.


Reading this headline has hit me unexpectedly hard :( Black banner please.


> The mostly unpredictable and extremely low duty cycle modes of non-line of sight VHF/UHF propagation like tropospheric ducting are not a realistic option for communications networks. Alternately the more reliable tropospheric scatter is a brute force solution requiring high output power of a kilowatt or more with big horn antennas.

A friend and I are currently prototyping a solution that transmits data via near-vertical incidence skywave (NVIS) propagation. It is possibly the only option when you want to avoid infrastructure at all costs. Of course, you have the disadvantage of huge HF antennas. To make the setup usable at all, we are experimenting with 20 m long copper wires close to the ground. If that does not work, we will try magnetic loop antennas.
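
For scale, a rough half-wave dipole calculation shows why wires of roughly 20 m come up for NVIS work on the 40 m band (the 0.95 end-effect factor is a common rule of thumb, not a measured value):

  # Approximate half-wave dipole length at typical NVIS frequencies.
  C = 299_792_458  # speed of light, m/s

  def halfwave_dipole_m(freq_mhz, k=0.95):  # k: end-effect shortening (rule of thumb)
      return k * (C / (freq_mhz * 1e6)) / 2

  for f in (3.8, 5.3, 7.1):  # 80 m, 60 m, 40 m bands, the usual NVIS range
      print(f"{f} MHz -> {halfwave_dipole_m(f):.1f} m")
  # 3.8 MHz -> 37.5 m, 5.3 MHz -> 26.9 m, 7.1 MHz -> 20.1 m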


If you have something to share, even if only at the documentation level, I'd invite you to join https://github.com/radio3-network so more people can help.

Over there we are building an operating system for the ESP32, which then controls the LoRa module. Some boards already come with LoRa built in, and it is possible to talk with Chinese manufacturers to customize them.


Not only will the antennas have to be huge (or really inefficient, like electrically small mag loops, which throw away ~20 dB of signal versus a resonant-size antenna), but the legal limits on channel bandwidth and baud rate kick in.

You cannot legally do high rate networks on HF NVIS.
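
For intuition, converting the ~20 dB small-loop deficit quoted above (the parent's estimate, not a measurement) into a linear power factor:

  # dB to linear: a 20 dB deficit means 10^(20/10) = 100x less radiated power.
  loss_db = 20
  print(f"{loss_db} dB -> {10 ** (loss_db / 10):.0f}x less radiated power")
  # 20 dB -> 100x less radiated power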


It's true, this will certainly not be a high-speed network, as 2.7 kHz is the maximum allowed bandwidth (compared to 500 kHz for LoRa). But it should be fast enough for transmitting text messages.

Antenna size is problematic, but NVIS does not require the antenna to be high up in the air. Also, the polarisation need not be vertical, so throwing a simple wire dipole on the ground might actually do the job. But we will see; it's an experiment.
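
To put the 2.7 kHz ceiling in perspective, a quick Shannon-capacity estimate (the SNR values are arbitrary assumptions for illustration) suggests text messaging is comfortably within reach:

  import math

  # Shannon bound C = B * log2(1 + SNR) for a 2.7 kHz channel.
  B = 2700  # Hz, the maximum legal bandwidth mentioned above

  for snr_db in (-10, 0, 10):  # illustrative SNRs, not measured values
      snr = 10 ** (snr_db / 10)
      print(f"SNR {snr_db:+d} dB -> upper bound ~{B * math.log2(1 + snr):,.0f} bit/s")
  # -10 dB -> ~371 bit/s, 0 dB -> ~2,700 bit/s, +10 dB -> ~9,341 bit/s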


This might be interesting for you: https://www.kk5jy.net/LoG/


> Humans "hallucinate" in the AI sense (it's an awful word that obscures how often we do it) all the time too.

Agreed. I'd like to add another point to the discussion. It seems to me as if LLMs are held to a higher standard regarding telling the truth than humans are. In my opinion, the reason for this is that computers have traditionally been used to solve deterministic tasks, so people are not used to them making wrong claims.


LLMs are also probed a lot more for the limits of their knowledge. Consider the thousands of hours of people's time that have gone into poking at the limits of ChatGPT's understanding alone.

Imagine subjecting a random human to the same battery of conversations and judging the truthfulness of their answers.

Now, imagine doing the same to a child too young to have had many years of reinforcement of the social consequences of not clearly distinguishing fantasy from perceived truth.

I do think a human adult would (still) be likely to be overall better at distinguishing truth from fiction when replying, but I'm not at all confident that a human child would.

I think LLMs will need more reinforcement from probing the limits of their knowledge to make it easier to rely on their responses, but I also think one of the reasons people hold LLMs to the standard they do is that they "sound" knowledgeable. If ChatGPT spoke like a 7-year-old, nobody would take issue with it making a lot of stuff up. But since ChatGPT is more eloquent than most adults, it's easy to expect it to behave like a human adult. LLMs have gaps that are confusing to us because the signs we tend to go by to judge someone's intelligence are not reliable with LLMs.


> It seems to me, as if LLMs are held to a higher standard regarding telling the truth than humans are.

Paradoxically, it seems as if the people who are pushing this the hardest are the same people who flat out deny even the slightest flicker of what could be considered intelligence.


> I don't understand the "coincidentally" argument.

Nothing is coincidental about those models. They were designed after processes in the brain. They underwent rigorous training to generate a function that probabilistically maps inputs to outputs, and eventually they exceeded the threshold where most humans consider them intelligent. As these models grow larger, they will surpass human intelligence by far. Currently, large language models (LLMs) have fewer weights than human brains have synapses, by a factor in the thousands (based on my superficial research). But what happens when they have an equal number, or even 100,000 times more? Such models will be able to model reality in ways humans cannot. Complex concepts like the connection between time and space, which are difficult for humans to grasp, will be easily understood by them.
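
As a sanity check on that "factor in the thousands", here is the arithmetic with commonly cited ballpark figures (both numbers are loose estimates, not measurements):

  # Ballpark: brain synapses vs. LLM weights (both figures are rough estimates).
  brain_synapses = 1e14        # often quoted as 1e14..1e15
  llm_parameters = 175e9       # a GPT-3-scale model, 175 billion weights
  print(f"ratio: ~{brain_synapses / llm_parameters:,.0f}x")
  # ratio: ~571x (with 1e15 synapses it would be ~5,714x)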

> LLMs do not hallucinate sometimes. They hallucinate all the time, it just is a coincident that sometimes these autocompletion of Tokens aligns with the reality. Just by chance, not by craft.

That is such a weird way to think about them. I'd rather say they always provide the answer that is most probable according to their internal model. Hallucination simply means that the internal model is not yet good enough and needs to be improved, which it will be.
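
A minimal sketch of the "most probable answer" idea: the model assigns a probability to every candidate next token and decoding picks from that distribution. The tokens and probabilities here are made up for illustration:

  # Toy next-token distribution for "The capital of France is ..." (made up).
  next_token_probs = {"Paris": 0.72, "Lyon": 0.11, "London": 0.09, "<other>": 0.08}

  # Greedy decoding picks the most probable token; a "hallucination" is just
  # the probability mass landing on the wrong token.
  best = max(next_token_probs, key=next_token_probs.get)
  print(f"greedy pick: {best} (p={next_token_probs[best]:.2f})")
  # greedy pick: Paris (p=0.72)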


Heh, another one I see:

"LLMs don't create anything new" and "LLMs hallucinate all the time"

I want to ask those people which of the two is correct, as they appear to conflict with each other.


> because they have not been engineered

The fundamental concept behind LLMs is to allow the model to autonomously deduce concepts, rather than explicitly engineering solutions into the system.


The fundamental concept is to learn the statistics of text, and in the process the model successfully captures syntax via long-range connections. There is no indication that it actively generates "concepts" or that it knows what concepts are. In fact, the model is not self-reflective at all; it cannot observe its own activations or tell me anything about them.


There is an indication: you can find it by clicking on this post.

The self-reflection part is probably true, but that's not strictly necessary to understand concepts.


It is important if we are to accept it as an agent that understands things, because self-reflection is so important and obvious to us.


I'm still waiting for someone to prove beyond a shadow of a doubt that humans have a single one of these features whose presence or absence in LLMs we're debating.


There is no way to prove that, because those features are subjective to humans. LLMs would at least have to show that they have a subjective view (currently the 'internal world' they report is inconsistent).


The internal worlds of humans are inconsistent: https://en.wikipedia.org/wiki/Shadow_(psychology)


> learn the statistics

> what concepts are

How do you know concepts aren’t just statistics?


"concept" is ill-defined , it s a subjective thing that humans invented. It is probably not possible to define it without a sense (a definition) of self.


This paper highlights a crucial aspect of evaluating AI language models: the significance of prompt construction (e.g. adding "think step by step").

When a model is given insufficient context beyond the question, it may generate responses based on its best guess. This situation can be compared to abruptly waking someone up in the middle of the night and demanding an immediate response to a question.

In contrast, when humans are asked to answer questions in a test setting, they are aware of the larger context and the importance of providing accurate answers.
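
As a concrete illustration of how little text it takes to change the context the model gets (the question is a made-up example, not taken from the paper):

  # Same question, with and without a chain-of-thought cue.
  question = ("A bat and a ball cost $1.10 together and the bat costs "
              "$1.00 more than the ball. How much does the ball cost?")

  bare_prompt = question
  cot_prompt = question + "\n\nLet's think step by step."

  # With many models the bare prompt tends to elicit the reflexive "$0.10",
  # while the step-by-step cue makes the worked answer "$0.05" more likely.
  print(bare_prompt)
  print("---")
  print(cot_prompt)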


> Warning - AI not welcome here

> Because of my own personal philosophy regarding technology and AI, all the code in this repository that was written by me - I wrote 100% on my own. There is and will be no usage of Github Co-Pilot or any other AI tool. I became a software developer because of my passion for our craft - Software Engineering. I build this tool because I enjoy programming. Every single line of code you'll read in this repo, that was written by me, is produced first in my mind and then manifested into reality through my hands. I encourage any contributor to follow the same principle, though I can't and don't want to put any restrictions on this. Just like people stopped walking because they commute by cars and trains, which caused an increase in obesity and illness, I believe that the massive usage of AI will cause people to stop thinking and using their minds and the resulting havoc is unthinkable.

AI Gatekeeping... What a time to be alive!


> Why not design UIs in an editor that saves layout to XML?

Not sure whether you are being sarcastic, but nothing is more '90s than XML-based layout descriptions. Mozilla's XUL (recently retired) comes to mind.


XUL needs to be resurrected (reimplemented)


> He broke in to an MIT networking closet (he was never a student there) and connected his equipment to the network.

The closet was unlocked, and he used regular guest access to the MIT network. Also, he was downloading documents that were created using public funds.

> There are a lot of much more legal ways to make the Internet freer. He was a smart guy and knew what he was doing was highly illegal.

There are always other, supposedly more effective, ways to do everything. With that kind of argument one must always come to the conclusion that it is best to do nothing. Also, let's not forget that he did much more than download documents at MIT.

> think that this is the major issue with martyrdom. Aaron is remembered for "fighting the man" but the real story is a significantly muddier than that. A martyr's death makes it seem like the martyr did nothing wrong even if they did.

That's a definition of martyrdom I have not heard before. Usually a martyr is simply defined as a person who is willing to suffer or even die for a cause, belief, or principle that they consider to be of great importance.

> Sorry, I know this is kind of a dumb and not so productive soap box. Oh well.

I will simply never understand why people argue so strongly against their own self-interest.


I’m not really talking about the definition of martyrdom, I’m talking about the effects of martyrdom on the general public.

E.g., what would Christianity have become had Jesus not died on the cross? The central motivation of the Christian faith is that Christ died for our sins. It wouldn't be so impactful if Jesus had died of old age like everyone else.

I’m basically saying that being a martyr is something that amplifies a person’s image, and that’s the reason why Aaron came up in the first place.

If he took the six month plea deal and was alive today, he would not be part of this discussion.


I'd agree that people aren't invoking him because they care about him personally; people seldom give a darn when someone else isn't given full credit. But I don't think it has anything to do with martyrdom: people were complaining about the reddit thing w.r.t. Aaron while he was alive, too.

The comments are driven by concern and a feeling of loss over reddit abandoning its formerly perceived public-spirited, democratic character in favor of corporate interests. It's only natural that people would highlight an early participant who seemed more aligned with their perspective and who seems to have been diminished in the modern narrative.

Does Aaron's separation from reddit explain its cultural changes? Things are seldom that simple. But when talking on a forum about our concerns with how reddit has changed over the years, a simple view is perfectly appropriate -- so for some people, bringing up the missing co-founder is a suitable way to express their views.


Perhaps you’re right…which brings up another point: all these people building idealist products need to stop selling out to investors and acquiring companies.

The recent news about Imgur violating its original purpose of being the anti-Photobucket image host for Reddit is the same thing, and even worse: Imgur was bootstrapped.

A founder can’t be said to have an idealist perspective if they sell their idealist platform to the highest bidder.

Jack Dorsey also comes to mind.


Hah. For Dorsey, the point where that happened was when he took the company public; I believe he's said he regretted that! :)

But I think the invocation of imgur brings up a good point. Was what we believed imgur to be ever actually economically viable? They were "bootstrapped" but took $40 million in 2013, back before almost all of their impact.

At least some of the funded things that 'sell out' were just never viable to begin with.

It's far from clear to me that reddit couldn't have become more like Wikipedia -- driven by its community and funded through public support -- but there are lots of things we could decry for violating their purpose that I think couldn't have existed economically in their original, more public-spirited form.

I find myself wondering how often alternatives that are viable but just a little less good are driven out of the market, or prevented from ever being created, by funded alternatives that aren't viable... leaving us stuck on a bait-and-switch treadmill while the services we actually need die for lack of support.

