
I’ve seen more horrendous code using macros in Elixir, even despite my brief foray, than I have ever seen in decades of working in languages with eval. Like using them when normal functions would suffice.

Using macros when a function would do is a legit anti-pattern (and documented as such [1]) but unrelated to the security aspect as they are compile-time constructs.

The reason they were added to the language was precisely so meta and dynamic programming is done at compile time, which you can introspect before you deploy, versus doing it at runtime, which is how most dynamic languages tackle this. And those languages are most likely not using eval either, but intrinsic features that allow you to define classes, attributes, methods, and so on programmatically.

I’d say eval is discouraged in most languages, although it is useful for building things like REPLs and interactive environments.

[1]: https://hexdocs.pm/elixir/macro-anti-patterns.html#unnecessa...


5 dozen Lucerne eggs is $50 here in SF at Safeway, as of this morning. A few weeks to a month ago, I recall it being $5-$6 for one dozen.


The drop is very recent and refers to upstream prices for contracts, which might not even have been delivered to stores yet. I had been watching the contract prices and successfully predicted increases in my local prices based on them, although the increases tended to lag behind the contract pricing changes by a week or more. I had been expecting another increase when the drop occurred, so it is unclear to me whether the drop prevented another increase or is something we will see passed on to us in the next week or two.

That said, Safeway is an expensive store. Shop at Aldi. It is cheaper. Aldi is so much cheaper that you likely could have groceries delivered from it via Instacart and still save money.


Don’t cancel the appointment; insist they repair it, do 3-4 attempts even if they close it as “expected characteristic” or “education”, then request a lemon law buyback for failure to honor the warranty. Check your purchase agreement for where to send the lemon law request. Demand incidental compensation during the buyback if they failed to provide loaners.


I would encourage you to think more critically. If you organized a peaceful protest, should you be held liable for other people’s actions? He was apparently deported for speech, not violence, according to the parent comment.


Well, it depends. What was the protest against?


Restraining others from entering or leaving is no longer "peaceful".


By all means, show us the video of him doing this then. Or is he being detained for the actions of somebody else?


I have no idea if there's footage of him personally doing such things, but he's a leader of the protests, and one of the people negotiating on behalf of the other protesters with the university. He is most definitely taking part in occupying part of campus illegally, so his ass should be deported.


I am lemoning my second Tesla in a couple of hours. Part of the criteria was that they have attempted to fix rattles and squeaks multiple times, the issues keep returning, and they keep causing additional issues when they attempt to repair the original issue.

The vehicle also keeps falling apart. The wheel covers keep falling off because the clips keep breaking, even though there has been no damage that should have caused it.

On the previous one there were about seven or eight repair attempts for the window failing to roll up, so I was unable to secure the vehicle, and they kept failing to provide loaners when it was in for service.

This is for the Model X and Model S, which are supposed to be their “luxury” vehicles.

By the way, if you would like to lemon your Tesla, please read your purchase agreement for the email address to send your request to. You get back all of the interest, payments, tax, registration, and everything.


What does "lemoning" mean?


Likely Lemon Law.

Lemon laws are laws that provide a remedy for purchasers of cars and other consumer goods in order to compensate for products that repeatedly fail to meet standards of quality and performance. Although many types of products can be defective, the term "lemon" is mostly used to describe defective motor vehicles, such as cars, trucks, and motorcycles.

https://en.m.wikipedia.org/wiki/Lemon_law


After some searching I've found that a 'lemon' is a slang term for a defective product.

https://www.wordsense.eu/lemoning/ It's number 5 here ^


It's more of a legal term of art than a slang term at this point. It's written into the statutes in some states, e.g. in Florida: http://www.leg.state.fl.us/Statutes/index.cfm?App_mode=Displ...





A comment replying to "Tesla is luxury" with "no it's not, mine keeps breaking" isn't unrelated. My 15-year-old Mercedes still looks brand new; luxury doesn't have a hard time keeping itself from falling apart.


How do interior quality and plastic firmness relate to a lemon? Does your car get totaled when there's an imperfection in a cupholder? Are you people out of your minds?


Man, please read the comment chain. "Tesla is premium" "actually it's shit, I got two bad ones" "completely unrelated post" "no it isn't".

At this point it feels like you're misunderstanding the chain on purpose.


Luxury and durability are mostly independent.


Math operators like division aren’t native to your (traditional) CPU; they are implemented in code, for example using algorithms similar to the ones taught to us in grade school for multiplying big numbers one digit at a time on paper, and “long division”.

All math operations can be implemented with bitwise operators too, I am pretty sure.

Likely the interviewer specifically needed the candidate to do that, implement the math, and tried to steer them that way numerous times (no sum table, don’t use the type system, no math operators). That’s likely also why they suggested allowing limited use of Google: they realize many people will need a refresher on bitwise operations, but they don’t want to outright tell you what to search for; they needed to see some resourcefulness. When they suggested OP was cheating, they likely didn’t mean it personally and actually wanted to help steer OP towards an acceptable solution. Rather than saying it’s cheating, they could have said it avoids the main thing we need to see, or outright said “please implement the low-level math from first principles”.

In my opinion the candidate did show resourcefulness in their own way, but sometimes it’s not even up to the person administering the interview, for example if they have been given a rubric.
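
To make the bitwise angle concrete, here is a minimal sketch (my own illustration, not necessarily what the interviewer’s rubric looked like) of addition and multiplication built only from bitwise operators and comparisons, in TypeScript:

    // Integer addition from bitwise primitives: XOR adds without carrying,
    // AND plus a left shift computes the carry; repeat until no carry remains.
    function bitwiseAdd(a: number, b: number): number {
      while (b !== 0) {
        const carry = (a & b) << 1;
        a = a ^ b;
        b = carry;
      }
      return a;
    }

    // Multiplication then falls out as shift-and-add, mirroring grade-school
    // long multiplication one binary digit at a time (32-bit semantics).
    function bitwiseMultiply(a: number, b: number): number {
      let result = 0;
      while (b !== 0) {
        if (b & 1) result = bitwiseAdd(result, a);
        a <<= 1;
        b >>>= 1; // unsigned shift so the loop terminates
      }
      return result;
    }

    console.log(bitwiseAdd(19, 23));    // 42
    console.log(bitwiseMultiply(6, 7)); // 42

Division could then be layered on top in the same spirit, as repeated subtraction or shift-and-subtract long division.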


Bitwise operators work on numbers. That's against the rules.

And while division can be implemented as repeated subtraction, you are not going to find any CPUs 4 bits and up that don't have an adder. It would be ridiculous to try to handle addition/subtraction in software.


> Math operators like division aren’t native to your (traditional) cpu, they are implemented in code

If you are talking about tiny microprocessors or old ARM chips, sure. But so are programming languages! They should have really asked him to code his solution in machine code then. After all, that's what you typically do in a NodeJS job :)


I just tried using Golden Layout the other day. The demo on their website with React support is v1, and when I installed it, I got v2, which apparently drops React support and has no documentation or examples on the website. With v1, it only supported class-based components. While in theory you could maintain your own adapter logic for Golden Layout, it seems somewhat defunct; there are other, newer libs that may even have better-feeling drag and drop, like this one. I also recall that about six years ago, when I went to use Golden Layout on another project, I ended up implementing my own with vanilla TypeScript because of some (perceived) issues with it. My experience has always been that although it is ubiquitous, it’s not that great.


I have been using various LLMs to do some meal planning and recipe creation. I asked for summaries of the recipes and they looked good.

I then asked it to link a YouTube video for each recipe and it used the same video 10 times for all of the recipes. No amount of prompting was able to fix it unless I request one video at a time. It would just acknowledge the mistake, apologize and then repeat the same mistake again.

I told it let’s try something different and generate a shopping list of ingredients to cover all of the recipes; it recommended purchasing amounts that didn’t make sense and even added some random items that did not occur in any of the recipes.

When I was making the dishes, I asked for the detailed recipes and it completely changed them, adding ingredients that were not on the shopping list. When I pointed this out, it again acknowledged the mistake, apologized, and then “corrected it” by completely changing it again.

I would not conclude that I am a lazy or bad prompter, and I would not conclude that the LLMs exhibited any kind of remarkable reasoning ability. I even interrogated the AIs about why they were making the mistakes and they told me because “it just predicts the next word”.

Another example: I asked the bots for tips on how to feel my pecs more on incline cable flies; it told me to start with the cables above shoulder height, which is not an incline fly but a decline fly. When I questioned it, it told me to start just below shoulder height, which again is not an incline fly.

My experience is that you have to write a draft of the note you are trying to create, or leave so many details in the prompt that you are basically doing most of the work yourself. It’s great for things like “give me a recipe that contains the following ingredients” or “clean up the following note to sound more professional”. Anything more than that, it tends to fail horribly for me. I have even had long conversations with the AIs asking them for tips on how to generate better prompts, and they recommend things I’m already doing.

When people remark about the incredible reasoning ability, I wonder if they are just testing it on things that were already in the training data or they are failing to recognize how garbage the output can be. However, perhaps we can agree that the reasoning ability is incredible in the sense that it can do a lot of reasoning very quickly, but it completely lacks any kind of common sense and often does the wrong kind of reasoning.

For example, the prompt about tips to feel my pecs more on an incline cable fly could have just entailed “copy and pasting” a pre-written article from the training data; but instead, in its own words, it “over analyzed bench angles and cable heights instead of addressing what you meant”. One of the bots did “copy paste” a generic article that included tips for decline, flat, and incline. None correctly gave tips for just incline on the first try, and some took several rounds of iteration, basically spoon-feeding the model the answer, before they understood.


You're expecting it to be an 'oracle' that you can prompt with any question you can think of, and it answers correctly. I think your experiences will make more sense in the context of thinking of it as a heuristic model-based situation simulation engine, as I described above.

For example, why would it have URLs to youtube videos of recipes? There is not enough storage in the model for that. The best it can realistically do is provide a properly formatted youtube URL. It would be nice if it could instead explain that it has no way to know that, but that answer isn't appropriate within the context of the training data and prompt you are giving it.

The other things you asked also require information it has no room to store, and would be impossibly difficult to essentially predict via model from underlying principles. That is something they can do in general- even much better than humans already in many cases- but is still a very error prone process akin to predicting the future.

For example, I am a competitive strength athlete, and I have a doctorate level training in human physiology and biomechanics. I could not reason out a method for you to feel your pecs better without seeing what you are already doing and coaching you in person, and experimenting with different ideas and techniques myself- also having access to my own actual human body to try movements and psychological cues on.

You are asking it to answer things that are nearly impossible to compute from first principles without unimaginable amounts of intelligence and compute power, and are unlikely to have been directly encoded in the model itself.

Now, turning an already written set of recipes into a shopping list is something I would expect it to be able to do easily and correctly, if you were using a modern model with a sufficiently sized context window and prompting it correctly. I just did a quick test where I gave GPT 4o only the instruction steps (not the ingredients list) for an oxtail soup recipe, and it accurately recreated the entire shopping list, organized realistically according to likely sections in the grocery store. What model were you using?


> an oxtail soup recipe

Sounds like the model just copy-pasted one from the internet; it's hard to get that wrong. The GP could have had a bespoke recipe and list of ingredients. This particular example of yours just reconfirms what was being said: it's only able to copy-paste existing content, and it's lost otherwise.

In my case, for example, I have huge trouble making it create useful TypeScript code, simply because apparently there isn't sufficient advanced TS code out there that is described properly.

For completeness' sake, my last prompt was to create a function that could infer one parameter type but not the other. After several prompts and loops, I learned that this is just not possible in TypeScript yet.
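
For what it's worth, here is a rough sketch of what I mean (hypothetical names, just to illustrate): TypeScript has no partial type argument inference, so you cannot supply one type argument explicitly and have the compiler infer the remaining one in the same call; currying is the usual workaround.

    // With two type parameters, supplying only T is rejected:
    // "Expected 2 type arguments, but got 1."
    declare function pick<T, K extends keyof T>(obj: T, key: K): T[K];

    // Workaround: split the call so T is given explicitly while K is
    // inferred at the second call site.
    function pickFrom<T>() {
      return <K extends keyof T>(obj: T, key: K): T[K] => obj[key];
    }

    const user = { id: 1, name: "Ada" };
    const id = pickFrom<typeof user>()(user, "id"); // inferred as number
    console.log(id);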


No, that example is not something that I would find very useful or a good example of its abilities- just one thing I generally expected it to be capable of doing. One can quickly confirm that it is doing the work and not copying and pasting the list by altering the recipe to include steps and ingredients not typical for such a recipe. I made a few such alterations just now, and reran it, and it adjusted correctly from a clean prompt.

I've found it able to come up with creative new ideas for solving scientific research problems, by finding similarities between concepts that I would not have thought of. I've also found it useful for suggesting local activities while I'm traveling, based on my rather unusual interests that you wouldn't find recommended for travelers anywhere else. I've also found it can solve totally novel classical physics problems, with correct qualitative answers, that involve keeping track of the locations and interactions of a lot of objects. I'm not sure how useful that is, but it proves real understanding and modeling, something people repeatedly say LLMs will never be capable of.

I have found that it can write okay code to solve totally novel problems, but not without a ton of iteration- which it can do, but is slower than me just doing it myself, and doesn't code in my style. I have not yet decided to use any code it writes, although it is interesting to test its abilities by presenting it with weird coding problems.

Overall, I would say it's actually not really very useful, but is actually exhibiting (very much alien and non-human like) real intelligence and understanding. It's just not an oracle- which is what people want and would find useful. I think we will find them more useful with having our own better understanding of what they actually are and can do, rather than what we wish they were.


Is the implication that developer experience does not matter and user experience is the only thing that matters?

I can improve the user experience faster and more reliably if i can iterate faster.

While I wouldn’t agree with the sibling comment that says it needs to be measured in milliseconds, I will say that the faster it is, the more motivated I am to work on it as well.


No, but when you choose NextJS you're choosing a product and should evaluate the reasons for doing so. It abstracts away a substantial amount of pain points in deploying and optimising React apps. The OP even talks about some of the things he's losing out on by dropping it. We spend so much time as developers moving on to the next thing and worrying about a few mins of build time when deploying (which is nothing), instead of just shipping.


It's not only a few mins when deploying. Often you want to test the production build locally, and it feels like waiting forever, especially since clicking around in the app in a dev build is slow during cold use. I think it has gotten worse; I don't remember build times taking so long in Next versions 5 thru 9 or so.


It doesn’t matter if you have five visitors or 5 million; you presumably want your website to stay online, so you need something that restarts the processes when they die, etc. Honestly, building and publishing a Docker image is super simple, and there are numerous hosted Kubernetes solutions to deploy it to as well. I don’t think that using NextJS or using Kubernetes means you are inherently overcomplicating anything, and this is coming from someone who prefers minimal technology (I personally don’t use NextJS, but I see the value in it for others).


Why is a process that is just serving static HTML regularly dying? NGINX, Apache, etc. will run for years unattended; they are designed to do this.


If that were true, why do people keep designing and deploying orchestrators? In your opinion, because they’re stupid?


You don't need to deploy orchestrators to serve static html/js files.

> In your opinion, because they’re stupid?

If they are doing that just to keep nginx running, that might be the case, or they are super clever and do it for a higher salary at the cost of their employer.


Because devops and their services are way more dynamic than a simple static HTML file.


Publishing a Docker image is simple. Managing a Kubernetes cluster, even hosted, may be much more complex, for no added benefit.

Throw your container on a VM, and make systemd or even runit keep it running. It scales fine to a half dozen boxes if needed. Same for your Postgres DB. For extra fancy points, keep a hot standby / read replica, and pick up any of the manual switch-over scripts.

Should keep you running until a dozen million DAU, with half a day spent on the initial setup.


Can someone explain what is so hard about kubernetes?

You build an artifact in CI, you spend a few minutes writing your deployment yaml, and then you deploy it. There's no more work after that.

What is so freaking hard about kubernetes? Why is everyone losing their minds over it? It isn't rocket science and it doesn't take a lot of time.

The nightmare is maintaining flaky snowflake bare metal machines.


SRE here. First off, updating the control plane/kubelet is a nightmare in itself, but let's assume you are running managed Kubernetes somewhere, so that's taken care of.

Kubernetes out of the box is not ready to go. What Ingress are you going to use? Ingress-Nginx. Cool cool, how is that getting deployed? Helm chart. How do we keep track of that being kept up to date and who deployed it? ArgoCD. So who is going to teach all the CRDs for Argo and how they work with each other? SREs. You understand we dislike the devs, and the last thing we want to do is hold classes on things they don't want to learn? JUST BUILD A PLATFORM. And here we go.

So out of the box, most people deploy Kubernetes + 8 "plugins", and it's a Frankenstein's monster that you have to manage or it will decide to kill all the workloads one day.

EDIT: I didn't even discuss certificates for that ingress, or all the monitoring/logging this cluster will need to make sure it's operating properly.


It has a terrible first-time user experience. Once you accept that developer experience means user experience for 19/20 programmers, it’s imperfect. For example, even though programming is all about reading, the average programmer looks more like the average person than ever before, and the average person hates reading. So Kubernetes bad. IMO this is why so much success has been found building SaaS on top of Kubernetes.


Because you have to have and manage a Kubernetes cluster, and that is not easy. Moreover, scaling is not always automatic; you don't just tell Kubernetes "scale this application" and have it work, if the application is a monolith. You have to write it using a microservices architecture, which is much more difficult.

Also, managing a CI pipeline isn't something very easy to start with; there are entire teams in companies that only do that.

In contrast, running a simple server is much simpler: you install a Linux OS (probably Ubuntu Server or Debian), install a web stack (for example Apache, PHP, and MySQL or Postgres these days), copy the files of your website into the root directory (/var/www), or, if you are fancy, pull a git repository onto the server so you can update it with a simple "git pull". If you need more websites on the same machine, configure virtual hosts (or use one of the many pieces of software that provide a GUI for configuring virtual hosts).

The second solution is much simpler; in fact, it is the most used solution as far as I can see. Most websites don't need high availability: if the website of the bike repair shop 100 meters down my street goes offline for a couple of minutes, or even a whole day because I'm doing maintenance on the server, really nobody notices it.


It’s not necessarily that it’s hard, but to be effective with kubernetes you need to understand a lot of infrastructure concepts like DNS, load balancing strategies, docker, storage drivers, service discovery, what exactly a pod is, what exactly a container is, etc.

It’s a lot of up front knowledge needed for marginal benefit at small to mid scales.

There are a lot of steps in the “write your deployment yaml” step.


If you are at small or mid scale and you need lots of compute during business hours, say 8 to 5, Monday till Friday, it is basically the best thing you can use to schedule your workload.

If you need things like load balancing and zero-downtime deploys, etc., you will probably end up building your own k8s, which is often worse.


Or you can deploy on app engine, lambda, firebase, etc.

Kubernetes is not the only game in town.

Sometimes shipping asap is more important. As with just about anything in tech, it’s about tradeoffs.

I may be biased, since I worked in devops before k8s came out, but building a decently scalable system architecture with load balancing and rolling deployments is pretty straightforward with monolithic systems. Especially since service discovery isn’t really a concern. Horizontal scaling works well in many many cases.

Realistically any app simple enough to be deployed by hand with a few docker containers will not be difficult to convert to k8s anyway.


You're doing the moral equivalent of that when you're setting up on bare metal.

