Can someone help me understand the advantage of using jsonnet, cue, or something else vs. a simple Python script (or a dialect like Starlark) when you need to dynamically generate some sort of config?
I've used jsonnet in the past to create k8s files, but I don't work in that space anymore. I don't remember it being better or easier than writing a Python script that outputs JSON, even before taking maintainability and such into account. Maybe I'm missing something?
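For concreteness, this is the kind of script I mean (a minimal sketch; the service names, fields, and ports are all made up):

    # Minimal sketch of a plain Python script that emits JSON config.
    # All names here (service, api, worker, ports) are made-up illustrations.
    import json

    def service(name, port, replicas=2):
        return {"name": name, "port": port, "replicas": replicas}

    config = {
        "services": [
            service("api", 8080),
            service("worker", 9090, replicas=4),
        ]
    }
    print(json.dumps(config, indent=2))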
To add to the sibling comments: after going from a jsonnet-based setup to a TypeScript-based one (via Pulumi), the biggest thing I missed from jsonnet was its native object merge operations. They're very useful for this kind of work because they let you say "I want one of these, but with these changes," even when the objects are highly nested, and you can specify whether to merge or override for each individual key.
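(If you haven't used jsonnet: it gives you this for free via its + and +: operators. Here's a rough, hand-rolled Python approximation of the semantics, just to illustrate what "merge or override per key" means; the function and example values are made up:)

    # Rough approximation of jsonnet-style object merging in plain Python.
    # Nested dicts get merged key by key; everything else gets overridden.
    def deep_merge(base, patch):
        merged = dict(base)
        for key, value in patch.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = deep_merge(merged[key], value)  # merge this key
            else:
                merged[key] = value  # override this key
        return merged

    base = {"replicas": 2, "resources": {"cpu": "100m", "memory": "128Mi"}}
    prod = deep_merge(base, {"replicas": 5, "resources": {"memory": "512Mi"}})
    # prod == {"replicas": 5, "resources": {"cpu": "100m", "memory": "512Mi"}}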
But ultimately this was a minor issue and I think it's far more important that you use something like this (whether a DSL or a mainstream PL) and that you're not trying to do string templating of YAML.
These are various points along the spectrum from Turing-complete config generation to purely declarative config. Declarative config is ideal in lots of ways for mission-critical things, but it's hard to write a lot of it because of the boilerplate.
A Turing-complete general-purpose language is entirely unconstrained in its ability to generate config, so it's difficult to understand all the possible configs it can generate. It's also difficult to write policy that forbids certain kinds of config from being generated by something like Python. And when you need to do an emergency rollback, it can be hard to debug a Python script that generates your config.
Starlark is a little better because it's deliberately constrained not to be as powerful as Python.
Jsonnet is, IIUC, basically an open-source version of the borgcfg tool they've had at Google forever. My recollection is that borgcfg had the reputation of being an unreadable nightmare that nobody understood. In practice, of course, people did understand it, but I don't think anyone loved working with it.
I definitely wouldn't use Python because it isn't sandboxed, and users will end up doing crazy things like network calls in your config.
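(A made-up anti-example of what I mean; the URL is hypothetical, but nothing in Python stops a "config" from doing this:)

    # Made-up anti-example: a Python "config" that reaches out to the network.
    # Nothing in the language prevents this, so the generated config becomes
    # nondeterministic and depends on network availability at generation time.
    import json
    import urllib.request

    with urllib.request.urlopen("https://example.com/flags.json") as resp:
        flags = json.load(resp)

    config = {"timeout_ms": flags.get("timeout_ms", 500)}
    print(json.dumps(config))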
Starlark is a good option though.
People will talk about Jsonnet not being Turing complete, but IMO that is completely irrelevant. Turing completeness has zero practical significance for configs.
I think I would enjoy something like this (or pinboard), but with comments.
Sort of a smaller version of Hacker News, where I could see what my friends are bookmarking and then write comments on those links, so we could chat about the content not necessarily with random people on the internet but with a smaller community (or maybe with everyone too, but in a different section).
Maybe someone here knows if anything like this exists already? I've taken a look at some of the options out there, but didn't end up trying them because it seemed like they didn't do anything like this, and were more focused on simply storing bookmarks (and maybe sharing them with upvotes, but nothing for conversation).
Hey, that's exactly what I've been building with lynkmi.com! The main idea is social link sharing, curation, and discussion in smaller circles. Have plenty more features coming down the line, including one I'm really excited about for bridging the in-person/online divide, but the general direction is described in lynkmi.com/about
Would love to give you an invite! I think my email is in my profile, if not my DMs on Twitter are open (@TheOisinMoran).
Interesting example. Reddit definitely started that way, but over time people went to it for discussion first. So the links actually end up getting ignored if the title of the Reddit post is enough for people to start posting comments right away.
Funny how they say: "Creators can now also opt their images out from training of our future image generation models." and then the link is just a form to submit a single image at a time.
They mention you can disallow GPTBot on your site, sure, but even if you do, what happens if the bot has already scraped your image? In any case, other people would probably just publish your picture on some other website that doesn't disallow GPTBot anyway.
> Instead, the reason people go to customer service is because of a question that’s so specific, or complicated, or gnarly in some respect, that there’s no way the app will have the answer: you need a human.
I take it the author probably hasn't done much customer service work. I'd say maybe 1% of customer queries go in that direction.
I agree dealing with a bot in that situation sucks, but a big chunk of customer questions can probably be answered without human intervention by LLMs, assuming they have enough data about the organization and its products.
You've nailed it. I've been a customer support specialist in tech startups for many years now, and the HUGE and overwhelming majority of questions are about things that have been covered in detail in the support documentation by any even half-competent team. People do not normally search for their own answers. More than 90% of tickets are well answered (according to the user's satisfaction score!) by a pre-written response explaining how an article in the support center addresses their specific question, and then linking to it.
This is why large companies end up hiring thousands of low-wage workers who are prohibited by their software from sending anything but a pre-written response. The first-level agents can't even edit it. They can only send you something someone else wrote for the common situation.
Chat bots could easily replace these workers, and the customer experience would be better for it.
You will (probably) always still need the higher level agents who are really tech support, the ones who can identify when someone's report really is a bug and get it reported to engineers for fixing.
Those aren't the low-level agents in these outsourced call centers, though. Even if those people do recognize an issue, they can't do anything until they've sent you some number of generic responses that haven't satisfied you.
At least the bot could be trained to customize responses within a set of parameters and match the tone of the asker.
When I use these chatbots or contact customer support, it's probably because I need a very basic and simple thing done.
Not because I don't want to do it myself, but because the website I'm using is throwing error messages, the form to do it myself disappeared five years ago, and the only way to accomplish what I want is to bother someone over the phone.
When I worked customer service, I'm sure customers could've done most of the things they wanted to do themselves, if the website had been clear, the help articles complete and current, the manuals up to date, and the necessary buttons accessible to the end user.
Companies sabotage themselves with shitty business practices so their cheap support lines get overwhelmed. Almost everything I've wanted to get done through a chatbot should've been an HTML form in my account panel, but businesses don't want to make it easy to return things or ask for refunds. They want you to jump through as many hoops and redirections as possible, because that makes them money.
With the way ChatGPT just lies and deceives out of the box, I've started taking screenshots of chatbot conversations. These businesses are making these chatbots their official point of contact so I take everything these bots promise to do or claim to have done as an official statement from their support department.
> I agree dealing with a bot in that situation sucks, but a big chunk of customer questions can probably be answered without human intervention by LLMs, assuming they have enough data about the organization and its products.
Honestly? I don't care about the company's problems providing support. If I have a problem and the company won't resolve it, or if they waste too much of my time forcing me to go through a bot first, then I'm not their customer anymore. There are plenty of other places that actually appreciate my business.
Yeah, the author is assuming that the general populace is like themselves (and like most HN commenters, I suspect): fairly high degree of technical competence, able to find workarounds for most issues in an app or website or forum somewhere.
I've played both and liked BOTW much more, although I also liked Horizon Zero Dawn. I also have the 2nd part of Horizon Zero Dawn (don't even remember the name) and tried to start it twice, but I guess I didn't get hooked and dropped it in favor of something else.
I guess they're two very different games. Even though both are "open world," I feel Zero Dawn is much more "on rails" than BOTW. There's much more exploration and finding things out for yourself in BOTW, while Horizon is more about "go there, do that; then go there and do this other thing," etc.
I don't know the author and I'm not part of Google, but I've also been an interviewer both in startups and bigtech, and I must say that I believe the process in bigtech is much better than what I see elsewhere. Every time I see posts about interviewing practices in bigtech, I feel folks are completely missing the point of why things are done this way. No wonder they just default to bashing the interview process.
Maybe it is understandable since the article does not clarify any of this, but folks should notice that:
- This type of interview is part of a process composed of multiple interviews. It is not the only one, and not the deciding factor. Questions on behavior, team dynamics, etc., are also part of the process and covered in other interviews (usually done by managers). You could fail a systems interview and still pass the interview process. The specific interview mentioned in the article seems not too complex, though, so I wouldn't be surprised if it is a screening interview that rejects candidates who don't pass.
- People mentioning that the interviewer is wrong on time or space complexity are also missing the point. It doesn't matter. If someone in the interview says O(n) instead of O(n^2) and can reason about it, that's already a good signal. It is not so much about being right or wrong. It is about thinking about those things when coding, keeping them in mind, being able to express that and walk someone else through your thought process, etc. Someone plainly saying something is O(n) without any further explanation can give a bad signal, but someone explaining why she thinks so can give a good signal, even if ultimately the solution is not O(n). (See the sketch after this list for the kind of example I mean.)
- IMO, folks saying that this favors junior engineers and people fresh out of college are wrong. This favors people who understand the foundations of programming. People fresh out of college had a refresher on that not so long ago, so they've been practicing it more recently. The thing is, some people in our industry get further away from those foundations as they add seniority to their career, but there are also lots of senior people who are very well versed in this kind of thing. Specifically, the question in this article is extremely simple, so TBH I don't understand why someone senior would fail to know the answer just because that person is senior. Maybe these companies are just not looking for that kind of senior engineer for these specific positions. That's all. There are also a lot of different seniority levels. I could see this interview having less weight on a decision as seniority increases, but it's still important.
- "this does not represent a realistic scenario". That's a given. As a company, you have maybe 8 hours total with the candidate (multiple rounds of interviews plus the on-site). Onboarding into the company and a team, until you're productive, will take weeks/months. There's no way of testing a realistic scenario on an interview. We get the best signal we can with the time we're given.
- Not passing the interview process is very demoralizing, but it does not mean the company thinks you're a bad engineer or that you're not good enough. The process is designed to minimize false positives, so they tend to prefer to reject candidates if there's any question or mixed signal in the interviews.
- I'm not saying the process is perfect, by the way. For example, there's luck involved, too. There shouldn't be, but there is. To give one example: interviewers are people too. They need to be trained in the process of interviewing, what signals to look for, etc. There are good and bad interviewers. An interviewer can have a bad day and judge an answer as unsatisfying when on another day it would be fine.
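(The sketch referenced above: a made-up snippet that looks like a single O(n) pass but is actually quadratic. Whether a candidate can spot and explain this matters far more than whether they name the right bound on the first try.)

    # Looks like one pass over the list, but list.insert(0, ...) shifts every
    # existing element on each iteration, so this is O(n^2), not O(n).
    def reverse_copy(items):
        out = []
        for item in items:
            out.insert(0, item)  # O(n) shift each time
        return out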
Surely there is some middle ground? Why does everything have to be black or white, and why does everyone need to give silver-bullet rules?
A service that stores something can be called "catalog," which is descriptive, short, and memorable. On the other hand, a service that does too many things can be called "zeus" or "cyclops" or whatever, and that's okay too. It's difficult to have memorable, short names when things do too many things or when, as mentioned in the post, they might change responsibilities.
Also, when you have something descriptive composed of multiple words (like "data-streaming-analyzer"), people will certainly start using acronyms (DSA), and then you're back to short names that don't mean anything.
There's a world where not everything is called "pikachu," "cyclops," "potter," and "tortilla," and not everything is called "main-store-red-website," "download-analytics-store-and-processing," or "sells-stream-processor." Use both. They are both useful!
I feel it's helpful to distinguish between "applications" and the "services" that make up those applications.
- application name: not descriptive, unless you are 100% sure the scope will not change over time (e.g., a specific report mandated by a regulator). Exception to the rule: I actually like using initialisms, because they start off descriptive, but over time people use the initialism exclusively, and it's almost like you invented a non-descriptive word without the initial confusion.
- service names: start with a monolith that is just the application name (or with the suffix "Core"), only split into other services once you have a good reason and the scope is clear, and then give each new service a descriptive name.
Max Howell's tweet gets pasted into every article about coding interviews as some kind of exemplar of the problems with whiteboard/algorithmic interviews.
It's actually pretty easy if you're comfortable with recursion and traversing trees. The tree questions Google asks in its interviews are usually much harder, which implies they were giving him softball questions.
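(For reference, the question in the tweet was inverting, i.e. mirroring, a binary tree. A minimal sketch, assuming a plain recursive node class:)

    # Minimal sketch: "inverting" (mirroring) a binary tree recursively.
    class Node:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def invert(node):
        if node is None:
            return None
        node.left, node.right = invert(node.right), invert(node.left)
        return node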
Knowing Homebrew could have given them reasons not to hire him. He made bad engineering decisions and brushed off feedback. And even he said he's a dick.
It's possible they told him, but given that he admits he didn't know what a binary tree is, it seems unlikely he did well in a Google coding interview, where I assume 90% of people do know.
> But ultimately, should Google have hired me? Yes, absolutely yes. I am often a dick, I am often difficult, I often don’t know computer science, but. BUT. I make really good things, maybe they aren't perfect, but people really like them. Surely, surely Google could have used that.
Hmm so he fully admits that he made a bad package manager and is often a difficult dick. He sounds pretty arrogant too. I wouldn't have hired him.
And yes, it is bad (try installing an old version of something). Just because something is popular doesn't mean it is good. It only ever had one competitor (ports), and that wasn't really Mac focused.
It's of no interest to Google that something bad you made happened to become popular. It's not like you can repeat that on demand.
Would you hire the person that wrote Bash? Or YAML?
MacPorts is Mac focused. People liked Homebrew because it didn't spend lots of time compiling its own copies of libraries the OS already had. But MacPorts eventually got binary packages, and Homebrew gave up on using OS libraries.
Perhaps he felt he knew it to the extent that the problems posed weren't completely foreign to him, and he felt his solutions were reasonable, which was possibly corroborated by subtle body-language cues from the interviewers or by direct verbal confirmation.
In my experience, good developers end up being good because they tend to do a lot of coding and read a lot about how to code correctly and how to improve their skills. Most people that I know don't do this because they need to, but because they are motivated to do it. So my question to you would be: do you like coding/developing? Are you up for spending time improving your skills?
You're saying you're not sure what to do about it, but from your post it sounds obvious to me that a natural next step would be to spend some time learning about CS, practicing coding and maintainability, and so on.
I'd also add that not every developer needs to know every single aspect of CS, all the theory behind it, or even be an excellent coder. There are different areas in developing and you might find one that motivates you without needing a lot of any of the specific areas you mentioned.
For me it's been the opposite, actually. I use Mac, Windows 10, and Ubuntu for different use cases, and I keep finding Mac more appealing. I sometimes have to work around Ubuntu to make stuff work. I really like Windows 10, but I have occasional issues there as well. With OS X, most of the stuff I try to do just works.
A few years back, I'd have preferred to buy my own hardware, build my own PC, and run Windows to save some cash and have a powerful machine. I would need to put in a bit of work to make it all run, sure, but I'd save some money. Now I'd rather get a Mac, which is more expensive, but with which I have to worry about far fewer hardware and software issues.
I wonder if getting one of those Microsoft Surface Laptops would give me a similar experience with Windows 10, though.