
1. GNU Name System to replace the DNS in a backwards-compatible manner, with delegation to cryptographic public keys (instead of IP addresses) and with strong guarantees against known attacks on the DNS (DNSSEC doesn't solve everything). https://gnunet.org/en/gns.html

2. Semantic sysadmin to declare your intent with regard to your infrastructure no matter how it is implemented (i.e. with a standard specification, interoperability/migration becomes possible) https://ttm.sh/dVy.md

3. GUI/WebUI CMS for contributing to a shared versioned repository. Sort of what Netlify is doing, but using a standard so you can use the client of your choice, and we tech folks can hold onto our CLI while our less-techie friends can enjoy a great UI/UX for publishing articles to collective websites.

4. Structured shell for the masses. PowerShell isn't the worst, but in my view nushell has a bright future ahead. For the people who don't need portability, it may well entirely replace bash, Python and Perl for writing more maintainable and user-friendly shell scripts. https://nushell.sh/

5. A desktop environment toolkit that focuses on empowering people to build more opinionated desktops while sharing the maintenance burden of core components. Most desktop environments should have a common base/library (freedesktop?) where features/bugs can be dealt with once and for all, so we don't have to reinvent the wheel every single time. Last week i learnt some DE folks want to fork the whole of GTK because it's becoming too opinionated for their usage, and GNOME is nowadays really bloated and buggy thanks to JavaScript hell. Can't we have a user-friendly desktop with solid foundations and customizability?




PM lead for PowerShell here, thanks for the callout! I'll take "isn't the worst". ;)

I'd love to get more of your thoughts around how PowerShell might be more useful for the kinds of scenarios you're thinking about. We see a lot of folks writing portable CI/CD build/test/deploy scripts for cross-platform apps (or to support cross-platform development), but we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).

Structured shells have so much potential outside of that, though. I find myself using PowerShell to "explore" REST APIs, and then it's easy to translate that into something scripted and portable. But I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.
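A rough sketch of what I mean (a real public GitHub endpoint; the filtering is just illustrative):

  # Explore interactively first; the same lines later drop straight into a script.
  $repos = Invoke-RestMethod 'https://api.github.com/users/powershell/repos'
  $repos | Where-Object stargazers_count -gt 100 |
    Sort-Object stargazers_count -Descending |
    Select-Object name, stargazers_count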

Plus, PS enables me to Google regex less :D


Stop shipping your org chart!

Microsoft has always had this problem, but with PowerShell -- which is supposed to be this unified interface to all things Microsoft -- it is glaringly obvious that teams at Microsoft do not talk to each other.

To this day, the ActiveDirectory commands throw terminating exceptions instead of writing non-terminating errors. Are you not allowed to talk to them?
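To illustrate (from memory, so treat it as a sketch):

  # The standard error-handling knob is ignored:
  Get-ADUser -Identity 'nobody' -ErrorAction SilentlyContinue   # still throws

  # So every call ends up wrapped like this instead:
  try { Get-ADUser -Identity 'nobody' }
  catch [Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException] { $null }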

The Exchange "Set" commands, if failing to match the provided user name, helpfully overwrite the first 1,000 users instead because... admins don't need weekends, am I right? Who doesn't enjoy a disaster recovery instead of going to the beach?

I'm what you'd categorise as a power user of PS 5.1, having written many PS1 modules and several C# modules for customers to use at scale. I've barely touched PowerShell Core because support for it within Microsoft is more miss than hit.

For example, .NET Core has caused serious issues. PowerShell needs dynamic DLL loading to work, but .NET Core hasn't prioritised that, because web apps don't need it. The runtime introduced EXE-level flags that should have been DLL-level, making certain categories of PowerShell modules impossible to develop. I gave up. I no longer develop for PowerShell at all. It's just too hard.

It's nice that Out-GridView and Show-Command are back, but they launch under the shell window, which makes them hard to find at the best of times and very irritating when the shell is embedded (e.g. in VS Code).

The Azure cmdlets are generally a pain to work with, so I've switched to ARM Templates for most things, because PowerShell resource provisioning scripts cannot be re-run, unlike scripts based on the "az" command line or templates. Graph is a monstrosity, and most of my customers are still using MSOnline and are firmly tied to PS 5.1 for the foreseeable future.
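To make "cannot be re-run" concrete, a simplified sketch (real Az cmdlets, made-up resource names) - every provisioning step needs a hand-written existence guard, where a template just converges:

  # Without the guard, a second run dies with an 'already exists' error:
  $params = @{ ResourceGroupName = 'rg-demo'; Name = 'stdemo01' }
  if (-not (Get-AzStorageAccount @params -ErrorAction SilentlyContinue)) {
      New-AzStorageAccount @params -Location 'westeurope' -SkuName 'Standard_LRS'
  }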

Heaven help you if you need to manage a full suite of Hybrid Office 365 backoffice applications. The connection time alone is a solid 2 minutes. Commands fail regularly due to network or throttling reasons, and scripts in general aren't retry-able as mentioned above. This is a usability disaster.

Last, but not least: Who thought it was a good idea to strip the help content out and force users to jump through hoops to install it? There ought to be a guild of programmers so people like that can be summarily ejected from it!


Thanks for the thoughtful response! Many of these are totally legitimate: in particular, we're making steady progress to centralize module design, release, documentation, and modernization, or at least to bring many teams closer together. In many cases, we're at a transition point between moving from traditional PS remoting modules and filling out PS coverage for newer OAuth / REST API flows.

I don't know how recently you've tried PS7, but the back-compat (particularly on Windows) is much, much better[1]. And for those places where compatibility isn't there yet, if you're running on Windows, you can just `Import-Module -UseWindowsPowerShell FooModule` and it'll secretly load out-of-proc in Windows PS.

Unfortunately, the .NET problems are outside my area. I'm definitely not the expert, but I believe many of the decisions around the default assembly load context are integral to the refactoring of .NET Core/5+. We are looking into building a generalized assembly load context that allows for "module isolation", and I'd love to get a sense, in the issue tracking that[2], of whether or not fixing that would help solve some of the difficulties you're having in building modules.

For Azure, you should check out the PSArm[3] module that we just started shipping experimentally. It's basically a PS DSL around ARM templates; as someone who uses PS and writes the Azure JSON, you sound like the ideal target for it.

As for the help content, that's a very funny story for another time :D

[1]: https://aka.ms/psmodulecompat

[2]: https://github.com/PowerShell/PowerShell/issues/2083

[3]: https://github.com/powershell/psarm


It looks like the main problem people have with PowerShell is slow startup. You should probably work on making it snappy as the main priority.

As far as the module problems go, this is IMO not really fair - you can't expect every team to have the same standards for how modules should work, whether the team is from Microsoft or not. The best you could do is perhaps form a consulting / standards-enforcing team for MS-grown modules.

I love PowerShell; it's really a poster child for how projects should be done on GitHub.

And I agree with you about REST APIs - I never use anything else to explore them (including Postman and friends) - I am simply more productive in pwsh. We love it so much at our company that we always create a PowerShell REST API client for our services by hand (although some generators are available) in order to stay in the spirit of the language; all automatic tests are done with it, using the awesome Pester 5.
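For a flavour of it (hypothetical endpoint, real Pester 5 syntax):

  Describe 'users endpoint' {
      It 'returns the requested user' {
          $user = Invoke-RestMethod 'https://api.example.com/users/42'
          $user.id | Should -Be 42
      }
  }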

Thanks for all the great work. PowerShell makes what I do a joy, to the point that I am always in it.


> we're always looking to lower the barrier of entry to get into PowerShell

I've used PowerShell regularly since way back when (it was still called Monad when I first tried it).

I'm extremely comfortable in the Windows environment, but even yesterday I found it easiest to shell out to cmd.exe to pipe the output of git fast-export, to stop PowerShell from messing with stdout (line feeds).

I really like the idea of a pipeline that can pass more than text streams, but it absolutely has to be zero friction to pipe the output of jq, git (and awk, sed etc. for oldies like me) without breaking things.
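For the record, the workaround looked roughly like this - PowerShell's redirection decodes and re-encodes native output as text, while the cmd.exe hop keeps the byte stream intact:

  # In Windows PowerShell this re-encodes the stream (and mangles line feeds):
  git fast-export --all > repo.fe

  # Shelling out keeps the pipe binary-safe:
  cmd /c 'git fast-export --all > repo.fe'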


We've fixed a ton of these in PowerShell 7 (pwsh.exe, as opposed to Windows PowerShell / powershell.exe), particularly because we needed to support Linux and more of its semantics.

If you're seeing issues within PowerShell 7, please file issues against us at github.com/powershell/powershell


The inability to handle simple text is my #1 annoyance. For the rest of them, see jiggawatts's comment.


In case you haven't seen it already, I found https://news.ycombinator.com/item?id=26779580 to be a pretty succinct list of the biggest stumbling points (latency, telemetry and documentation).

A couple of more specific points I'd like to add after experience writing non-trivial PS scripts:

- Tooling is still spotty. Last I used the VS Code extension, it was flaky and provided little in the way of formatting, autocomplete or linting. AIUI PowerShell scripts should be easier to statically analyze than an average bash script, so something as rigorous as ShellCheck would be nice to have too.

- Docs around .NET interop still appear to be few and far between. I recall having to do quite a bit of guesswork around type conversions, calling conventions and the like.

It's nice to see the docs have had a major overhaul since I last dug into them though :)


Tooling?? I write powershell scripts in notepad.


Notepad? Amateurs and your IDEs. REAL developers work without single-level undo or the conveniences of vi or emacs.

Real developers use `edlin`.


I thought we just used a magnetized needle with a steady hand?


The real pros use butterflies.


> we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).

apt search powershell returns no meaningful result on Debian unstable. I think that's a big barrier to entry, at least for me and people who deploy using docker images based on Debian and Ubuntu.


Good to know! I've generally understood that the bar for package inclusion for both Debian and Ubuntu is fairly high (where Debian wants you to push to them and Ubuntu will pull from you).

Our setup today is simply to add an apt repo[1] (of which there is just one domain for all Microsoft Linux packages), and then you can `apt install`.

We also ship pretty minimal Ubuntu and Debian (and Alpine and a bunch of other) container images here.[2]

Oh, and we ship on the Snap Store if you're using Snapcraft stuff.

[1]: http://aka.ms/install-pslinux

[2]: https://hub.docker.com/_/microsoft-powershell


Don't return everything, return what I specifically returned (yeah, I know about objects, I'm talking about everywhere else). I know it will never happen, but one can dream. Pain points aside, you and your team are doing an excellent job. Thank you

Edit: unless you are also responsible for DSC, then I'll take it back. It's terrible.


Unfortunately, we can't ever change that one, or the whole world of existing stuff will break.

It's intended as a shell semantic where anything bare on the command line just gets run, no matter your scope.

However, when we introduced classes, we thought a more "dev-oriented" semantic made sense, so we changed how return works there.

This will only return 'this will return':

  class foo {
    [string] ReturnTest() {
      'this will not return'    # pipeline output inside a class method is discarded
      return 'this will return' # only the return statement produces the method's output
    }
  }

  ([foo]::new()).ReturnTest()


Please bump the priority of https://github.com/PowerShell/PowerShell/issues/3415 - it makes scripts ugly and hard to convert. It's also a source of bugs when users add lines to scripts.


The behaviour with UTF-8 is still so strange to me. I get random behaviour when piping commands because UTF-8 still isn't the default for everything.
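The usual workaround (a sketch - both directions need setting explicitly):

  # What PowerShell sends to native programs on the pipeline:
  $OutputEncoding = [System.Text.Encoding]::UTF8
  # How PowerShell decodes what native programs print:
  [Console]::OutputEncoding = [System.Text.Encoding]::UTF8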


> I'll take "isn't the worst". ;)

You should! It was definitely a compliment.

> I'd love to get more of your thoughts

On a technical level, i would say PowerShell is a breakthrough because it democratized the concept of a structured-data REPL as a shell. This pattern was well-known to GNU (and other LISP) hackers but not very popular otherwise, so thank you very much for that. Despite that, having telemetry in a shell is a serious problem in my view. That, and the other technical criticisms others have mentioned (see previous HN discussions about PowerShell), are why i don't use PowerShell more.

On a more meta level, i'd say the biggest missing feature of the software is self-organization (or democracy, if you'd rather call it that). The idea is great but the realization is far from perfect. Like most products pushed by a company, PowerShell is developed by a team who have their own agenda/way and don't take the time/energy to gather community feedback on language design. I believe no single group of humans can figure out the best solutions for everyone else, and that's why community involvement/criticism is important. For this reason, despite being much more modest in its current implementation, i believe NuShell, being the child of many minds, has more potential to evolve into a more consistent and user-friendly design in the future.

Beyond that, i have a strong political criticism of Microsoft as part of the military industrial complex, as a long-standing enemy of free software (still no GitHub or Windows XP source code in sight despite all the ongoing openwashing) and of user-controlled hardware (remember when MS tried to push for SecureBoot to not be removable in BIOS settings?), as an unfair commercial actor abusing its monopoly (forced sale of Windows with computers is NOT ok, and is by law illegal in many countries) and more generally as one among many corporations in this capitalist nightmare profiting from the misery of others and contributing its fair share to the destruction of our environment.

This is not a personal criticism (i don't even know you yet! :)) so please don't take it personally. We all make questionable ethical choices at some point in life to make a living (myself included), and i'm no judge of any kind (i'll let you be your own judge if you let me be mine). In my personal reflection about my own life, I found some really good points in this talk by Nabil Hassein called "Computing, Climate Change, and All our Relationships", about the human/political consequences of our trade as global-north technologists. I strongly recommend anyone to watch it: https://nabilhassein.github.io/blog/computing-climate-change...

> how PowerShell might be more useful for the kinds of scenarios you're thinking about

I don't think i've seen any form of doctests in PowerShell. I think that would be a great addition for many people. A test suite in separate files is fine when you're cloning a repo, but scripts are great precisely because they're single files that can be passed around as needed.
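i imagine something like it could even be prototyped on top of comment-based help - a hypothetical sketch (Invoke-DocTest is made up; the help objects are real), assuming examples are written as `<expression>  # => <expected>`:

  # Run every .EXAMPLE block of a command's help as a doctest.
  function Invoke-DocTest {
      param([string]$Command)
      foreach ($ex in (Get-Help $Command -Full).examples.example) {
          $code, $expected = ($ex.code -split '# =>', 2).Trim()
          $actual = (Invoke-Expression $code | Out-String).Trim()
          if ($actual -ne $expected) {
              Write-Warning "doctest failed in ${Command}: got '$actual', wanted '$expected'"
          }
      }
  }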

> Structured shells have so much potential outside of that, though.

Indeed! If they're portable enough, have some notion of permissions/capabilities, and have a good type system, they'd make good candidates as scripting languages to embed in other applications, because those applications usually expose structured data and some form of DSL, so having a whole shell ecosystem to develop/debug scripts with would be amazing.

I sometimes wonder what a modern, lightweight and consistent Excel/PowerShell frankensteinish child would look like. Both tools are excellent for less experienced users and very functional from a language perspective. From a spreadsheet perspective, a structured shell would for example enable better integration with other data sources (at a cost of security/reproducibility but the tradeoff is worthwhile in many cases i think). From a structured shell perspective, having spreadsheet features to lay data around (for later reuse, instead of linear command history) and graph it easily would be priceless.

> I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.

Well that's precisely what nushell's "from" command is doing, supporting CSV, JSON, YAML, and many more! https://www.nushell.sh/book/command_reference.html no SQL there yet ;-)

PS: I wish you the best and hope you can find some time to reflect on your role/status in this world. And i hope i don't sound too condescending, because if you'd asked me yesterday what i would tell a Microsoft higher-up given the occasion, it would have been full of expletives :o... so here's me trying to be as friendly and constructive as possible, hoping we can build a better future for the next generation. Long live the Commune (150th birthday this year)!


I read all of it, as well as some more of your writings that I found, and I very much appreciate your thoughtfulness. I don't agree with everything you've said here, but you raise some very good points. Thanks, friend. :)


In case the nushell link doesn't work for anyone else, consider prefixing it with www: https://www.nushell.sh/

Admittedly, i have no idea why we even need to do that nowadays, but that seemed to work.


It's because the bare apex domain isn't set up to accept requests, but the `www` subdomain is (its DNS points to a different IP):

  $ curl -v --head https://nushell.sh/
  *   Trying 162.255.119.254...
  * TCP_NODELAY set
  * Connection failed
  * connect to 162.255.119.254 port 443 failed: Connection refused

  $ curl -v --head https://www.nushell.sh/
  *   Trying 185.199.108.153...
  * TCP_NODELAY set
  * Connected to www.nushell.sh (185.199.108.153) port 443 (#0)
Most hosts will alias or redirect away the www subdomain, but that's just a convenience. Of course technically foo.com and www.foo.com can have different DNS entries.


My first IT gig, 25 (sigh) years ago, I tried to set up <ourdomain> as an alias for www.<ourdomain>. Seemed to work ok, but somehow I noticed that I had broken email delivery through our firewall, so I reverted the change. Couldn't figure out exactly what was going on, and set it aside.

A few months after I left, I sent an email to a friend who still worked there, and it bounced exactly the same way. Called up my friend in a hurry, and sure enough, they had just finished deploying the same change.


Why even have www.<ourdomain>.<tld> in the first place then, if <ourdomain>.<tld> is entirely sufficient on its own?

It does appear that it's mostly done for historical reasons, and sometimes you need CNAME records[1], but overall it feels like it probably introduces unnecessary complexity, because the www. prefix doesn't really seem to be all that useful apart from the mentioned situation with CNAMEs.

That's kind of why i asked the question above - maybe someone can comment on additional use cases or reasons for sticking to the www convention, which aren't covered in the linked page.

When i last asked the question of a company whose website was only available with www but not without, i got an unsatisfactory and non-specific answer where spam was mentioned. I'm not sure whether there's any truth to that.

[1] https://en.wikipedia.org/wiki/World_Wide_Web#WWW_prefix


It depends on the setup. Some cloud load balancers like AWS ELB require a CNAME, and DNS (RFC 1912) doesn't allow other records at a name that has a CNAME.

So you can't put a CNAME on the apex, which probably also has MX records. I think in some cases, like Exchange, if it sees a CNAME it doesn't bother looking at the MX.
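In zone-file terms, the conflict looks like this (a sketch with made-up names):

  example.com.      IN  MX     10 mail.example.com.
  example.com.      IN  CNAME  lb.cloud-provider.example.  ; not allowed next to the MX
  www.example.com.  IN  CNAME  lb.cloud-provider.example.  ; fine, nothing else lives at www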

Back in the day "CNAME flattening" or aliases (RFC draft draft-ietf-dnsop-aname) wasn't a common thing, so only real way was to redirect the domain apex to www, and then use a CNAME on the www. You'd probably need a separate service/servers to handle that redirect (at least DNS round robin would work in this case). So yea extra complexity in that case, due to the requirements. Or, give them DNS authority (eg, AWS Route 53).

Then there's the whole era of TV/radio commercials telling people "www dot <name>", so a lot of people type it anyway. You can redirect www to the apex, which some sites do for a "clean brand", but now browsers are dropping the www in the UI anyway.

I've also run into plenty of situations where www worked but the apex didn't. Relatedly, it's a little surprising that browsers didn't default to trying www first when you type the apex. And recently, now we're getting SVCB and HTTPS DNS RRs along with A/AAAA (and maybe ANAME). Indeed, lots of complexity.


While there are plenty of domains which only exist to serve a website, quite a few others have more than that.

With a website, if you want to push it onto a Content Delivery Network (CDN), it is easy to change www.example.com to point (via a CNAME record) to the right place.

If, however, you want to do that with just example.com and also want to run things like mail, you cannot use a CNAME record.

The why is long and boring, but that is the situation right now.


Is it long and boring? I thought it was just: if you declare a CNAME, you can't declare any other record types. Full stop.


> 3. GUI/WebUI CMS for contributing to a shared versioned repository.

Did you have some specific tool in mind? Because I completely agree that this is a great way of working with content. We have been doing that for a couple of months with our own tool. It uses Git and stores content in a pretty-printed JSON file. Techies can update that directly and push it manually. Content editors can use our tool to edit and commit to Git with a simple Web UI. Would that go in the direction you were thinking of?


The closest i can think of is NetlifyCMS, but it's terrible because it's a bunch of JavaScript both server side and client side, and is really not intended to be selfhosted despite being open-source.

If NetlifyCMS were a robust library for abstracting over the versioning system and the static site generator, on top of which you could build WebUI/TUI/GUI clients, that would fit what i have in mind. I don't know of any such program yet; please let me know if you find something :)


OK, I see. The tool we are working on is similar but also not quite what you are looking for. In case you want to have a look, it is available on <my-name> .io


NetlifyCMS comes to mind!


I am a lifelong sysadmin, and have thought about #2 frequently. I am thinking seriously about making it a research project. Who is behind this document you linked to? Is it tildeverse? Parts of that document are pretty up to date, so it does not seem very old. I am not really familiar with the null pointer site, or even how to find things on it (as in, every now and then I am surprised). I am surprised I have not seen this before.


Hello, i'm the author of this document, although it's just a draft for the moment (lots of TODOs in there), which is why it's not on my blog yet.

nullpointer is just a file upload system, of which ttm.sh is an instance residing in the tildeverse. I sometimes use it to publish drafts to collect thoughts/feedback on ideas i have. I'm also part of the tildeverse: i reside on thunix.net and do sysadmin for fr.tild3.org. I'm often around on #thunix, #selfhosting (etc.) on tilde.chat in case you're also around :)

> I am a lifelong sysadmin, and have thought about #2 frequently. I am thinking seriously about making it a research project.

I think a lot of us have been obsessed with this idea for a while, but nobody to my knowledge has done it yet. If you feel like exploring this idea, amazing! It is in my view a complex-yet-solvable problem that many projects have failed to deal with because they've been too focused on narrow use-cases and not on the broader conception of a standardized specification ecosystem for selfhosting. If you feel like exploring this idea collectively (for example by cooperating on a standard for which you would contribute a specific implementation), count me in. I think a lot of brilliant people will be glad to board the ship once it's sailing!

If you'd like to see where this idea has taken me so far, take a look at the joinjabber.org project. The entire infrastructure is described in a single, (hopefully) human-meaningful configuration file, with Ansible roles implementing it: https://codeberg.org/joinjabber/infra
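To give a flavour of the kind of file i mean, here's a made-up sketch (not the actual joinjabber format):

  # one file describing intent; tooling (here, Ansible roles) makes it real
  domain: example.org
  services:
    xmpp:
      software: prosody
      backups: [user-data]   # useful state: belongs in backups
      tls: auto              # byproduct: regenerated from the config
    web:
      software: nginx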

Wish you the best; please keep me updated if you have more thoughts on this topic or would like to actively start such a project.


Declarative is awesome for bootstrapping an infrastructure from nothing, but once it’s running and people are depending on it, it actually matters a lot which operations are done to it, in what order, and with what dispersion in time. Doubly so if it’s stateful. In the real world we see either hints in the “declarative” config to tweak execution order, or procedural workflows expressed ad hoc as a sequence of goal states. I think the future of sysadmin will be more explicitly procedural. Services managing services through the normal building blocks of services: endpoints, workers, databases, etc.


> I think the future of sysadmin will be more explicitly procedural.

I think that's true for the lower levels of abstraction, because sysadmin work is ultimately a finite list of steps. Being able to program your infrastructure in a type-safe, fail-safe way is very important. In the grandparent comment, i was arguing for building higher-level abstractions (semantic sysadmin) on top of that, to make it easier to understand how your infrastructure is configured and to make it reproducible/forkable.

Think of it like there are two kinds of state stored on your system: useful state (e.g. user data) and infrastructure byproducts (e.g. TLS certificates). The former must be included in backups; the latter can be regenerated on the go from the server config. The kind of declarativeness i'm interested in is the kind that enables any member/customer to fork the entire infrastructure by editing a single config file if they're not happy with the provided services, and from there they can import their own useful state (e.g. a website/mailbox backup). Hypothetically, the scenario would be: "is Google Reader closing down? Let's just fork their infra, edit the config file, and import our feeds, and all we have to do is use a different URL to access the very same services".


> 4. Structured shell for the masses.

I wonder if shell stuff would work better in a notebook like environment.

Edit: At least one exists: https://shellnotebook.com/


There was a time I did all my shell commands just directly in Perl. If you have it all in your head you can do crazy stuff pretty quickly. Especially by having libraries that you can import and invoke in one line.


That's really nice. I never really got into perl, but i was amazed by some sysadmin friends back in the day playing crazy tricks right in front of my eyes... doing the same in bash would have taken me minutes or even hours, so i always felt like this was some kind of great wizardry.

But perl is not really the most user-friendly language to learn, in my opinion. I think raku is much better in this regard, but unfortunately it is not deployed widely (yet?).


> GNU Name System

Really? Can you elaborate a bit on the why? As far as I can tell, GNS has been around as a proposal for years and has gained no traction.


> Really? Can you elaborate a bit on the why?

It's the only serious proposal i've seen to replace the DNS protocol. It's backwards-compatible with existing zones (same RR types, plus a few new ones like PKEY), with existing governance (the protocol has a notion of a global root, which is very likely to be ICANN), and replacing traditional DNS recursive resolution with GNS recursive resolution is not such a challenge... and in any case most devices don't even do recursive resolution but act as a stub for a recursive resolver further up the network (a somewhat-worrying trend).

So, the fact that GNS can live alongside the DNS for as long as people need is for me a very appealing argument. Also, GNS uses proven p2p schemes (DHT) in a clever way instead of reinventing the wheel or building yet another crypto-ponzi-scheme. "crypto" in GNS means cryptographic properties that ensure the security/privacy of the system, not tech-startup cryptocoin bullshit, and i truly appreciate that.

Then, looking at the security properties of GNS, it looks great on paper. I'm no cryptographer so i can't vet the design of it, but ensuring query privacy and preventing zone enumeration at the same time sounds like a useful property to build a global decentralized database, which is what the DNS protocol is about despite having many problems.

Building on such properties, re:claimID is a clever proposal for decentralized identity. I'm very much not a fan of digital identity systems, but that one resembles something that would be respectful of its users as persons. There are FOSDEM/CCC talks about it if you'd like to know more.

> GNS has been around as a proposal for years and has gained no traction

I wouldn't exactly say that. I'm not involved in the GNUnet project so i don't know the specifics, but GNS was at least presented at an ICANN panel as an "emerging identifier", that is, one of the possible future replacements for the DNS. I'd consider being recognized by ICANN as a potential DNS replacement some "traction" (more than i expected). If you can find the video from that ICANN session where both GNS and Handshake are introduced, it will probably do a better job than me of explaining why GNS is well suited to replace the DNS. The URL used to be https://icann.zoom.us/rec/play/tJErIuCs-mg3E4GXtgSDB_UqW464f... but that's not loading for me anymore.


Thanks for the very comprehensive reply. It does sound like GNS is gaining more traction than I thought, and I hadn't heard of re:claimID, which sounds pretty interesting in its own right.


Serious question: who is the target audience of Nushell? I know for a fact that I couldn't get someone like my parents or my girlfriend to use something like that day to day, and if you need a shell to work more efficiently, why not just learn bash? The learning curves of the two seem pretty much exactly the same, and bash has the benefit of being installed on almost every Linux/Unix system, even Macs (I think they actually use zsh now, but still).


> I know for a fact that I couldn't get someone (...) to use something like that day to day

I know for a fact the opposite is true for me. A simple shell syntax with amazing documentation is all it takes for people to write useful scripts.

I'm confident i can teach basic programming to a total newbie using a structured shell in a few minutes. Explaining the quirks of conditionals and loops in the usual shells is trickier: should i use "-eq" or "=" or "=="? why am i iterating over a non-matching globset? etc.

> why not just learn bash?

I have a love-hate relationship with bash. It's good, but full of inconsistencies, and writing reliable, fail-safe scripts is really hard. I much prefer a language in which i'm less productive but which doesn't cost me hours of debugging every time i hit an edge case.

Also, bash has very little embedded tooling compared to nushell. In many cases you have to learn more tools (languages) like awk or jq. In nushell, such features are built in.

> being installed on almost every Linux/Unix system

Well, bash is definitely very portable. But at this game, nothing can beat a standard POSIX /bin/sh. Who knows? It may outlive us all :)


> Who is the target audience of Nushell?

People who a) are trying to escape from the insanity of traditional shells and use something that works with structured data, and b) want something other than PowerShell.



