My preferred .NET console stack (devlead.se)
199 points by devlead on Jan 15, 2021 | 110 comments



For F# implementations I suggest https://fsprojects.github.io/Argu/ as the command line argument handling library.


I liked Argu when writing .NET in the past. It seemed a lot more feature-rich and easier to whip something up with than the free alternative C# OSS ones at the time - and IMO, looking at this for a typical CLI command app, it probably still is. Argu's advantage over this, at least it seems to me, is that there's a lot less boilerplate (more defaults, fewer annotations, fewer classes/files). You could write a solid CLI app with subcommands, etc. in a single script if you wanted to. Spectre's advantage seems to be if you want to create an interactive CLI application.


Spectre.Console looks like a really neat library. I am going to have to check this one out for some of my CLI utilities.

With the advent of Blazor, you could also look at building rich UX on top of the very same business services used in the console variant. I have found that Blazor is productive enough to consider using universally for management/configuration/admin dashboards for whatever business process. You could even have console interface and multiple Blazor apps operating side-by-side within the same logical process. Microsoft's DI & AspNetCore support this kind of thing out of the box.
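A minimal sketch of that sharing idea with the generic host (IReportService/ReportService and the extension method name are made-up placeholders, not anything from the article):

    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // Illustrative business service shared by both front ends.
    public interface IReportService { string Build(); }
    public class ReportService : IReportService { public string Build() => "report"; }

    public static class BusinessServices
    {
        // Call this from the console host and from the Blazor app's startup alike,
        // so both front ends register the same services.
        public static IServiceCollection AddBusinessServices(this IServiceCollection services)
            => services.AddScoped<IReportService, ReportService>();
    }

    public static class Program
    {
        public static Task Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices(services => services.AddBusinessServices())
                .RunConsoleAsync();
    }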


Blazor is still a bit rough around the edges but I find it so promising. I really hope Microsoft wholeheartedly embraces it for front-end development.


Have you seen the Blazor work planned for .NET 6? Hot reload and desktop apps with Blazor UI are both priority-0 (ultra-high priority) epics, I'm really looking forward to it.

https://themesof.net/


Hot reload would probably cut my implementation time in half on this current project. Regardless, it's still light-years better than developing some JavaScript SPA and fighting all that extra boilerplate.


What is an epic?


It's a bigger unit than a "(user) story" in agile development.


Second this. Want to use Blazor but worried they are going to pull the rug out from under us; Silverlight, anyone? (smile)


Always a possibility but seems unlikely based on the efforts and cross-platform story that Blazor brings in.

Silverlight required add-ons to be pre-installed on all platforms. That was too similar to Flash, and Silverlight became obsolete as soon as those plug-ins were not going to be supported on iPads and iPhones. There was no real point anymore.

At this stage, I see Blazor as a necessity for the .Net platform to move forward on being relevant for apps that require UI. There is no real official .Net cross-platform way of building complex user interfaces.

Blazor has the advantage of being a platform on which MS can build on to bring all types of apps together: they can be OS agnostic (.NET 5.0 already runs on Win, Mac and Linux) and target web, desktop and mobile in one swoop, although it will require a few iterations to have an ecosystem rich and stable enough to work well and consistently everywhere.


The biggest issue with it is the really large binaries that have to be downloaded on page start. So it isn't necessarily suitable for use cases where speed matters. But at the same time, it's perfect for dashboards; I really like working with it.


Most large websites could do a little extra work on trimming their image size & be fine with the size of Blazor with WASM.


Server-side Blazor is as fast as any other server-side app, and can prerender the page before the JS adds the websocket connection for interactivity.

Client-side Blazor can also be prerendered now and has much better trimming so you can still see static HTML content instantly while the rest loads up in the background.


We use server-side Blazor for all of our interfaces. These load incredibly quickly. I personally don't like the concept of WASM and client-side Blazor, especially for use cases where <100 people are going to be using the system at the same time.


I prefer referencing PowerShell and implementing cmdlets: System.Management.Automation.PSCmdlet.

I don't see a whole lot of developers familiar with this package, but it is great if you can see yourself using PowerShell or a batch file wrapper to invoke it. No boilerplate, no parsing, auto-generated help, and type-safe validations.
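For anyone who hasn't seen the pattern, a rough sketch of a binary cmdlet (the verb/noun and parameters here are just illustrative):

    using System.Management.Automation;

    // Invoked from PowerShell as: Get-Greeting -Name world [-Loud]
    // Parameter binding, validation and help come from the attributes.
    [Cmdlet(VerbsCommon.Get, "Greeting")]
    public class GetGreetingCommand : PSCmdlet
    {
        [Parameter(Mandatory = true, Position = 0)]
        [ValidateNotNullOrEmpty]
        public string Name { get; set; }

        [Parameter]
        public SwitchParameter Loud { get; set; }

        protected override void ProcessRecord()
        {
            WriteObject(Loud ? $"HELLO {Name.ToUpper()}!" : $"Hello {Name}");
        }
    }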


Since you've looked into .NET 5.0 - is it possible now to create a single .exe without having to carry extra .dlls? (I'm still stuck on .NET Framework 4.7.2 for one reason or another.)


It's just a fancy self-extracting archive... it's not a statically linked binary with tree shaking of everything that is not actually used.


In .NET Core 3.1 it was a self-extracting archive; I think in .NET 5.0 the assemblies are directly mmapped from the .exe without an extraction step. Also, you can enable some tree shaking: https://docs.microsoft.com/en-us/dotnet/core/deploying/trimm... But tree shaking can be problematic if the application uses Reflection.
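For reference, a hedged example of the relevant publish switches on .NET 5 (the runtime identifier is just an example; do test trimmed output because of the Reflection caveat):

    $ dotnet publish -c Release -r linux-x64 --self-contained true -p:PublishSingleFile=true -p:PublishTrimmed=true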


> But tree shaking can be problematic if the application uses Reflection.

Reflection is common.


Ugh - I was expecting something better. Then again, .NET bytecode is probably more mergeable (without knowing much about it, I'm assuming a "Java"-like essence), but there is still the issue of "pdb" merging (though that should be doable...). Obviously native code is problematic.

Back at Google, I really liked how this was solved. When building a Java or Python tool, Bazel (blaze) would internally compile the Java runner to become the main app, the (combined) .jar could be part of it (not sure right now, but it could be), and any native C/C++ JVM code would be part of the Java runner (all statically linked) - so you end up with a single executable.

Same for Python - any externally linked C/C++ becomes part of the "python.exe" (sorry, I'm mostly in Windows terms now), and the rest of the compiled Python code may go as a .zip with the .exe - or even be embedded directly as data.

I really hope Microsoft goes this way. Distributing single executables (ok, two with the .pdb) goes a long way toward easing the process - CI, tools, or just "here is a tool (to a coworker) that will work for you", whereas right now it's... "oh, and you need these extra .dlls (and app.config, and now .dll.recipe files, and who knows what else)".

I really do wish .NET comes to realize that a single true binary is a must (without extracting temp files - that is so prone to errors, security issues, leaks, etc.).


You should read the docs because it's already available: https://docs.microsoft.com/en-us/dotnet/core/deploying/singl...


Thanks!! Awesome to hear about this - ".NET 5.0 improves this experience by directly running the code without the need to extract the files from the app." - phew, I feel relieved; I can now convince folks to upgrade to 5.0.


There will be some required .dlls alongside your .exe. The size of the .exe for a simple console app is around 60 MB.


that's coming in .NET 6.0


Cocona [1] is my command line parsing library of choice.

1. https://github.com/mayuki/Cocona
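For the curious, a minimal sketch of the Cocona style, going from memory of its README (the Greet command and its options are made up):

    using System;
    using Cocona;

    class Program
    {
        static void Main(string[] args) => CoconaApp.Run<Program>(args);

        // Public methods become commands; parameters become options/arguments,
        // and --help is generated for you.
        public void Greet(string name, bool loud = false)
            => Console.WriteLine(loud ? $"HELLO {name.ToUpper()}!" : $"Hello {name}");
    }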


I'm using https://www.nuget.org/packages/CommandLineParser/ for command-line parsing, but I found it awkward when handling bools whose defaults are true: then you can't set them to false, and in order to do so you have to make them optional, and then your flag becomes --something=false. Not terrible, but not great either; I would've preferred --no-something instead. Other than that it serves me well, though I would check out Spectre.Console too! Thanks!
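If I've understood the bool awkwardness right, it's roughly this shape (a sketch only - the option name is made up and I haven't re-verified the library's exact behaviour):

    using System;
    using CommandLine;

    class Options
    {
        // A plain bool is a switch: present => true, absent => false.
        // With Default = true you can't switch it off, so (as described above)
        // the workaround is a value-taking option used as --keep-temp=false,
        // rather than a nicer --no-keep-temp.
        [Option("keep-temp", Default = true, HelpText = "Keep temporary files.")]
        public bool? KeepTemp { get; set; }
    }

    class Program
    {
        static void Main(string[] args) =>
            Parser.Default.ParseArguments<Options>(args)
                .WithParsed(o => Console.WriteLine($"keep-temp: {o.KeepTemp}"));
    }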


For console apps I've been using .NET Foundation's System.CommandLine [1] recently.

It looks like this Spectre.Console you are using is a bit more fancy albeit a lot newer.

1 - https://github.com/dotnet/command-line-api/


Yeah, I quite like System.CommandLine.DragonFruit, particularly for what should be throwaway test apps; I like the way you just add the parameters to Main.
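For those who haven't tried it, with the DragonFruit package referenced it looks roughly like this (the parameter names are just an example):

    using System;

    class Program
    {
        /// <param name="name">Who to greet.</param>
        /// <param name="count">How many greetings to print.</param>
        static void Main(string name = "world", int count = 1)
        {
            // DragonFruit generates --name/--count and --help from this signature
            // (and from the XML doc comments, if present).
            for (var i = 0; i < count; i++)
                Console.WriteLine($"Hello {name}!");
        }
    }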


I too have arrived at Spectre.Console as the best CLI argument handling library.

Aside: I was testing the start-up time of my program last night and discovered that it is being delayed approx. 100ms by Spectre.Console. If any contributors are on this thread, it may be worth looking at start-up performance.


Dependencies, dependencies, dependencies. All for a simple console program. And you end up with a console app that is 20-100MB. Traditionally this program would have been a few KB. It still can be, if you use the right language(s).

Making software should not be about making the life of the developer as easy as possible; it should be about making things easy, small, and FAST for the user.


> And you end up with a console app that is 20-100MB

What are you talking about? It's 1.4MB.

This is the easiest thing in the world to verify:

    $ dotnet publish -c Release           
    Microsoft (R) Build Engine version 16.8.3+39993bd9d for .NET
    Copyright (C) Microsoft Corporation. All rights reserved.

      Determining projects to restore...
      All projects are up-to-date for restore.

      ConsoleApp -> /tmp/ConsoleApp/bin/Release/net5.0/ConsoleApp.dll
      ConsoleApp -> /tmp/ConsoleApp/bin/Release/net5.0/publish/


    $ cd /tmp/ConsoleApp/bin/Release/net5.0/publish/
    $ du -h
    1.4M .
And no, I don't want to hear about any figures for self-contained builds. It's an article about useful packages to help architect your .NET console app. It's to be read in that context, where the .NET 5 runtime is available.


> And no, I don't want to hear about any figures for self-contained builds.

Then what is your point? When talking about the size required for a program to run, you have to weigh all of its dependencies.

For anyone interested, such a binary ends up at least 50MB, if we include the dependencies mentioned in the article in question.


> For anyone interested, such a binary ends up at least 50MB

.. if you don't make any use of assembly optimisation/trimming, sure.

    $ dotnet publish -c Release
    Microsoft (R) Build Engine version 16.8.3+39993bd9d for .NET
    Copyright (C) Microsoft Corporation. All rights reserved.

      Determining projects to restore...
      Restored /tmp/ConsoleApp/ConsoleApp.csproj (in 18.97 sec).
      ConsoleApp -> /tmp/ConsoleApp/bin/Release/net5.0/linux-x64/ConsoleApp.dll
      Optimizing assemblies for size, which may change the behavior of the app. Be sure to test after publishing. See: https://aka.ms/dotnet-illink
      ConsoleApp -> /tmp/ConsoleApp/bin/Release/net5.0/linux-x64/publish/
    $ cd /tmp/ConsoleApp/bin/Release/net5.0/linux-x64/publish/
    $ ls
    total 29M
> When talking about the size required for a program to run, you have to weigh all of its dependencies.

Except the dependencies might not even be relevant depending on which OS you're targeting - and in the context of this article, why is this even interesting? You can make all of the same criticisms of the JVM, or .NET web apps or anything else - do you statically link libc?


Just to be clear, a self-contained build includes the .NET runtime and all its libraries in your build output.

Most people just install .NET on their machines. If so, that 1.4MB executable will run just fine.


The .NET framework is usually expected to be installed at the system level.

And to what extreme do you take that line of thought? Should the kernel size be included?


In my opinion, it is not the greatest argument.

You write:

>And you end up with a console app that is 20-100MB.

Yea, but you can also say

>And you end up with a decent size app that is 20-100MB.

and also you can say

>And you end up with a decent size app that is 2MB

because you already have the runtime, since you use .NET apps.


You may well be right for Windows. However, many command line tools need to be cross-platform, and in that situation (especially on Linux) the .NET framework/runtime very likely won't be installed.

Unless you are only doing Windows tooling, you probably do need to bear that in mind.


That is demonstrably untrue. Very few users care at all about the size of an app, and even fewer devs care to put in all the extra effort to use some unfamiliar tech stack just to save their ambivalent users' machines from using up some extra, but (usually) imperceptible CPU and RAM resources.

EDIT: That said, I do wish that the trimming/tree-shaking systems could do more to get the size and speed of easy-to-code solutions way down.


They do care, they've just gotten used to bloated software and therefore don't have high expectations. I submit that this is indicative of our failings as an industry.


Not a single non-tech person I know has ever complained about the size of an app on download. Not one. They don't even look at the size; it's not relevant to them. They want to do something that some app enables. Beyond how well it lets them do that, no one cares about the size until they are out of space, which is usually because they have so many videos and photos stored.

You're kidding yourself if you think the average person looks at the size of an app.


The only case I can think of is mobile apps on slow connections. Even on a fast connection waiting a minute to download an app seems a bit annoying.


I don't see much evidence that they care until it notably impacts their free disk space, which at current storage sizes is when you reach about 1GB (or even much later, depending on your target demographic)


> Very few users care at all about the size of an app

That depends on the target audience. IME the target audience of console/CLI programs is more likely to care about size. CLI apps tend to cater more to power users. And many users definitely do care about resource usage, and smaller apps do in fact tend to use fewer resources.

All users definitely care about performance. A fast app feels nicer. Smaller-sized software tends to be faster. Weird correlation? I think not.


I agree. Your points about size and speed are valid.

I think what made me defensive was the implication that these CLI libraries can be easily replaced with some small hand-coded solution. This particular library adds color coding, a structured help message based on the configuration, and some building blocks for DI, etc. Getting that stuff right in a hand-coded solution that isn't thousands of lines would take most developers a lot more time and effort.


> these CLI libraries can be easily replaced with some small hand-coded solution.

If that was your takeaway from what I wrote that was definitely not my intention. I do not think that.

I am sure several of these libraries are well-crafted; the developer(s) behind them might be excellent and might have cared a great deal about performance and done their best to optimize every function. My point was more: people tend to pull in a lot more functionality/features than they need, and the few things you do need, it is often better to write yourself (you may disagree again).

This will often give you the most optimized solution, and the one with the smallest binary.

In another HN thread, someone mentioned Unity and the size of the binary required just to display a pixel, a square, or whatever their example was - although their example was not an apples-to-apples comparison and not quite fair, the point still stands: Unity is a general solution, and if your only requirement is drawing a pixel, of course Unity is not the correct solution.


I deeply care about performant and small apps. I definitely don't want to have 100GB+ games on my disk, 150MB+ Electron-based messaging apps, Docker images with 1GB+ node_modules/ folders, or even worse, Docker images with 600MB+ of binaries (looking at Terraform and its plugins).

However, I care even more about having apps/tools that help me achieve the desired goals, in an easy, clear, and elegant way.

If having bigger apps means that developers have more time to implement or improve features, and solve bugs, I'm all for it.


On a tangentially related note: apparently the huge game sizes are on purpose these days. With downloads and storage so fast and cheap, developers ship assets in a format optimized for the in-game experience instead of space savings. Better to use more space and reduce the initial startup and loading times.

Contrast this with a game like Detroit... It took my system (5700 XT and 3950X) what seemed like nearly 30 minutes to arrive at the menu screen due to having to compile all the shaders and who knows what else. I'm sure there were other considerations there, but still...


> And you end up with a console app that is 20-100MB.

FWIW, this is sort of a legacy of .NET's "enterprisey" history. Code size and trimming weren't really a priority when most .NET code ran server-side or inside enterprises; now they are.

There's been a lot of work happening recently to trim .NET executables and publish them as single files, and it looks like .NET binary sizes will eventually be in the same ballpark as Go: https://blog.kalalau-cantrell.com/2021/01/make-smaller-conso...


Don't hold your breath. I've been keeping an eye on CoreRT and it actually slowed down and even regressed compared to Dotnet Core's rate of development while I was watching it.

I think maybe in 2016? 2018? they were committing a lot to it, probably hoping to make it production-ready. Then priorities shifted, they barely advanced it, and now they've moved it into "runtimelab", which doesn't bode well to me.


> Don't hold your breath. I've been keeping an eye on CoreRT and it actually slowed down and even regressed compared to Dotnet Core's rate of development while I was watching it.

You do not need CoreRT for trimming - trimming is already there: https://devblogs.microsoft.com/dotnet/app-trimming-in-net-5/

It's still got a long way to go, though.


My understanding is the opposite; now that they're planning to merge Mono and CoreCLR, there is significantly more pressure on the .NET team to have a proper AOT solution.

"Early adopters can experiment with native AOT form factor" is a priority-zero (highest priority) epic for .NET 6: https://github.com/dotnet/runtimelab/issues/248


CoreRT was an experiment and it's concluded. The learnings and components like RyuJIT have now been moved to the new project called NativeAOT which has high priority.

https://github.com/dotnet/runtimelab/tree/feature/NativeAOT


> 'in the same ballpark as Go'

Not the lowest rung on the ladder.


> FWIW, this is sort of a legacy of .NET's "enterprisey" history.

So is this type of boilerplate.


In photography they say the best camera you can own is the one you have on you when you want to take a picture. Most cameras these days are cell phones.

In software, the best software is one that was built and works to solve a problem, not necessarily one that is engineered perfectly.

Otherwise Slack, Chrome, etc... wouldn't routinely eat up 1GB of RAM just to open.

They do, and they are well liked.


Beyond that, the fact that you have to do any of this is silly. The .Net tooling used to be easy; now you'll be able to build your entire application in Python before you're even done configuring your .Net environment.

I know I'm grossly exaggerating, but the fast movement and the lack of ease of use have been among the primary reasons we've slowly moved toward Python and PowerShell after nearly two decades of C#. Being a Windows-happy enterprise org, we still make plenty of use of the .Net family and friends of course, but not really for development.


Who said you have to? In fact, you don't at all. He's only pulling in 5 dependencies. One is a command-line option parser, which you'd have to pull into a Python project as well. The second is a bridge for that parser to integrate with the dependency injection - which you arguably may or may not need. The third and fourth are for logging. The last one is completely unnecessary and can be ignored. Again, it's just his way of doing things.

But let's not create this false narrative that somehow .NET has become super complex. Anyone can create a console application with -

dotnet new console -n TestConsoleApp


https://docs.python.org/3/howto/argparse.html

Python has an argument parser in the standard library


Yes it has, but argparse is not even comparable to [1] Click, or similar libraries.

[1] https://click.palletsprojects.com/en/7.x/


Yes but I was responding to the “have to” in: “...a command line option parser which you'd have to pull into a python project...”


There you go: https://docs.microsoft.com/en-us/archive/msdn-magazine/2019/...

argparse - or close to it :-)


> The .Net tooling used to be easy

.NET tooling used to be a horror. Now it's good. Speaking as someone who has done CI/CD work on it for a long time. You seem to speak as someone who doesn't have .NET muscle memory.

Programming is complex, unless you finish with hello world (and even hello world is complex in enterprise environments).


He's probably looking at it from a different angle.

As someone who has had to set up MSBuild & other BS for a CI server, I definitely agree with you.

For a dev used to double-clicking in Visual Studio, the current setups are more complex.


Indeed.

If 'it works on my computer' is a measure, then the pre-.NET Core era is as good as it gets.


Except it isn't. We obviously had setups that mirrored production long before Docker and CI/CD became a thing, but spending time configuring those things for every project is just insane.

Even Python's virtual environments take less time to use than .NET.

Sure, I actually liked the "double click" thing in Visual Studio, but that had nothing to do with "it works on my computer"; it simply saved bundles of time because we'd publish every project to the same damn IIS instance and not have them all run their independent web servers with their independent setups.

Hell, we still put things behind the same damn IIS instance and its load balancer and all that, because why wouldn't we? We're not Netflix, we don't need to scale to two billion people. Our max load is 50,000 concurrent users, yet our build and deployment pipeline is now so best-practice, SOLIDVOLID, buzzworded, CONSOLELIEK and complicated that it might actually work for Netflix with enough iron.

What's worse is that it's now your job. Not the operations dude's, who actually specializes in this - no, yours, along with keeping up with, you know, actual programming.

Being in the public sector, we benchmark everything, and the things we don't benchmark we hire E&Y to benchmark. And you know what our most expensive resource has seen the biggest increase in time going into over the past 30 years? Configuring and maintaining their tooling. Not developing new things that are useful for our actual business, no sir, but working with the tools that allow them to develop things. It's up by 130% compared to 1998.

I'm not sure what you'd call that inside big tech, but in non-tech, we tend to call that a waste of resources.

At least we can lower the cost with vendor lock-in on Azure, right?


> The .Net tooling used to be easy, now you’ll be able of building your entire application in python before you’re even done configuring your .Net environment.

> I know I’m grossly exaggerating...

I do not even think it is an exaggeration. Even non-performant Python still feels like a hacker's get-shit-done language compared to .Net. Anything .Net still feels like slow, enterprise bloat. Even "dotnet build/run" is slow, no matter the size of your project.

https://github.com/dotnet/sdk/issues/8697


>Making software should not be about making the life of the developer as easy as possible, it should be about making things easy, small, and FAST for the user.

Seems a bit presumptuous to decide the requirements for every console app ever made. Shouldn't it be determined on a case-by-case basis whether a 20MB difference in app size matters?

Also, why exactly does a 20MB app size mean an app is too slow? I'm not really following that line of logic.

The title of the post is "My preferred .NET console stack", not "this is how every console app should be written."


> Making software should not be about making the life of the developer as easy as possible, it should be about making things easy, small, and FAST for the user.

To be honest, I don't think the typical use case for a template like this is to "Make software", it's to solve a problem as quickly and cheaply as possible. Building software to solve these types of problems is 99% about solving the problem and 1% about how well it performs.


> Making software should not be about making the life of the developer as easy as possible, it should be about making things easy, small, and FAST for the user.

I think you’re vastly optimistic about the ratio of ESR-“real-hacker” to “I just work here” developers, as well as how much an end user cares whether the executable is 100KB vs 100 megs.


It makes a lot of sense to make development and maintenance easy, because that is where all the costs are. Size of the resulting app only matters in certain circumstances (say, distribution of ios apps) and rarely matters for Console stuff.

Dependencies, dependencies, dependencies = Reuse, modularisation, standardisation of boilerplate


>it should be about making things easy, small, and FAST for the user.

If that were true, people would be using IRC, not Slack or Teams. Users don't care, and your statement is completely false.


I am not sure most users would agree with your assessment that IRC is easy. To many it is arcane.

If you created two Telegram alternatives that were identical except for performance, would users not choose to use the more performant one?

Statistics like these (I just found this one at random, but there are many like it available) demonstrate that your average user does in fact care a lot about performance:

https://www.marketingdive.com/news/google-53-of-mobile-users...


In a perfect world, yes, they would, but how many times has the better tech died in our history? It's too many to count.

If I wrote the perfect chat app tomorrow (I'm talking universally loved by anyone who uses it), I'm still not guaranteed to succeed. If someone's whole family is on iMessage and they are happy, you aren't going to be able to get them to switch. So they couldn't use my app if they wanted to.

Thus they stay with the "inferior" product. Average people aren't interested in the latest and greatest. They just want it to work, and as long as it does, they don't care about shortcomings that don't affect them, i.e. app size and speed (as long as it's good enough).


Ahhh, the good old days when people wrote quality software. I remember them fondly.


When were those days and how do I get back to them?

I'm being sarcastic :-)


Not bad but honestly not _that_ much of an improvement over Microsoft.Extensions.CommandLineUtils (at least from what is shown here).


My colleague had a Windows issue and I have been getting a box set up using scoop and winget. Going to play with this. Day job is OS X, but my Win10 box is getting some love and this is good motivation to experiment. Cheers!


For Python, I've standardized on click.

It provides simple decorators to turn functions into commands, sub-commands, etc., with easy options and built-in help.


I was using click, but it requires too much boilerplate. Fire is much simpler to use, just as powerful (or maybe 95% as powerful) and it's developed by Google devs (it's not an official Google project as far as I remember), so it seems of reasonable quality.

https://github.com/google/python-fire


Click is great and a feature-rich option. But recommend checking out Typer, which is even simpler if you're already using type annotations. [0]

[0]: https://github.com/tiangolo/typer


That's why people use Python: to avoid tons of .NET console boilerplate that does pretty much nothing.


"Typer stands on the shoulders of a giant. Its only internal dependency is Click."


Didn't say it was a replacement for Click. I said it was simpler if you're already using type annotations.


Not trying to be incendiary, but is there a real need for tables and graphs and all of those colors in a console application?

Usually, if you want structured information from some process, you just build a web application that does its work on a schedule or when it receives an API request. Console applications are great for "fire and forget", or even for being called by something else and then passing simple data back to the caller if necessary.

I had a co-worker who built an extremely complex table display system using ncurses. I wondered what the point was - who is going to see all of these fancy tables? The person who SSHes into the system, quickly validates its correctness, and then closes the connection? Or people who use VNC/Remote Desktop just to see the status of what's going on?

Edit: referring to the functionality of Spectre.Console.


No of course not, unstructured text is the pinnacle of CLI UX and anything else is a waste of oxygen. /s

I frequently SSH into systems that are not running desktop environments and being able to run htop, iftop, tmux, or any "graphical" CLI application makes my time there much more pleasant.

Yes there is a need to have structured, interactive, non-text based interfaces in a CLI.


As a .Net developer, I've... never seen people write code like this at work. This looks like a hefty amount of boilerplate to achieve practically nothing. I'm guessing this guy learned ASP.NET first.


> hefty amount of boilerplate to achieve practically nothing

It depends on what you mean by "practically nothing". If you're going to deliver a non-trivial command-line application that does something useful for other people, it needs to have robust input validation, proper auto-complete, and help that's sensitive to which subcommand/option you're trying to invoke. It would also be nice to get an --update option more or less "out of the box"(+).

Those things add up!

(+) With .NET 5, I now see that ClickOnce allows you to publish a console app from Visual Studio, but it doesn't seem to work the way one would expect. The console app is launched from a start-menu application ref, just like a WPF app. I would have liked it to add the command to the user's path so they could just use it once they installed it.


I really do disagree with you. Most of the example code that comes out of MS is just downright terrible, and it encourages terrible code. They're always writing as if you're working at Reddit or Facebook scale, not the code 99.99% of their customers should be writing. I downloaded a random git project to load test a SignalR app yesterday because, for whatever silly compile reason, I couldn't get the one inside aspnetcore to compile (which is a whole massive problem with "modern" .NET in itself - how are we back to DLL hell?).

It was hundreds of lines long, didn't work properly, and once I'd culled the crap, pointless boilerplate, it was a whole 20 lines.

C# really is great; you can write a lot with some really clear, obvious, terse code. But the awful, over-engineered, useless code that seemingly 75% of C# developers write is not - it's unreadable crap that adds nothing to performance, massively hinders readability, and tarnishes the language.


Robust input validation and help, sure. I've never had a massive use for auto-complete. Maybe my console applications are just too small in scope. I write multiple programs rather than one program with "sub-operations".

I've always used the (rather dated) library NDesk.Options [0]. Even though it was written way back in the .net 2 days. For me, it seems to hit a very nice sweet spot between power and complexity.

[0]: http://ndesk.org/Options


> auto-complete

I think folks have now gotten used to auto-complete in PowerShell commands. It's really harsh not to give that to them when they're expecting it. :-)


> This looks like a hefty amount of boilerplate to achieve practically nothing.

I have yet to find a good way to avoid boilerplate in handling command line input on my console applications.

My naïve self feels that it should be standard, and yet every console application operates with different nuances. It's also a part of the code that I see a disproportionate amount of bugs/feedback on (for sufficiently complicated programs). I see developers consistently underestimate the work required to handle command-line input.

I don't think this guy's solution is a silver bullet, but I'm happy to see the methods.


Nobody should be writing one-off command-line parsing code. If you don't want something fairly heavy like Spectre, then just pull in a lightweight library like CommandLineParser.


One trick you can use if you are super lazy is to just use a config.json file that always resides next to your exe. JSON files are very easy to edit/share, and can be trivially serialized in and out of your models for convenience (a one-liner with either Newtonsoft or the built-in serializer). JSON is also a good interchange format between systems, so it can quickly integrate into a bigger system without much hassle.
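A minimal sketch of that trick, assuming System.Text.Json and a made-up AppConfig model:

    using System;
    using System.IO;
    using System.Text.Json;

    public class AppConfig
    {
        public string InputPath { get; set; } = "data.csv";
        public int Retries { get; set; } = 3;
    }

    public static class ConfigFile
    {
        static string ConfigPath => Path.Combine(AppContext.BaseDirectory, "config.json");

        // Falls back to defaults if config.json isn't next to the exe.
        public static AppConfig Load() =>
            File.Exists(ConfigPath)
                ? JsonSerializer.Deserialize<AppConfig>(File.ReadAllText(ConfigPath)) ?? new AppConfig()
                : new AppConfig();

        public static void Save(AppConfig config) =>
            File.WriteAllText(ConfigPath,
                JsonSerializer.Serialize(config, new JsonSerializerOptions { WriteIndented = true }));
    }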


The optparse-generic library in Haskell generates a CLI directly from a struct.


Yes and no; I definitely write throwaway C# console apps all the time and just go through the hassle of generating a solution and boilerplate, so I might use this kind of thing.

It's a bit heavy - I'd have to invest some time learning his framework - but it could be worth it. I've certainly written lots of command line parsing and logging code I'm not proud of.


Kind of my line of thinking as well. I would never pull in a library with 100 features when my software only needs 2 of those features. I write those 2 features myself.


a) Which of those libraries do they use only 2% of the features from? Once the template is filled out, won't it be higher?

b) There are certain things that look simple, but you _do not_ want to waste your time coding the corner cases yourself when this is a solved problem.

e.g. you might think that command-line args are simple, but they are very much not. If you get '-file foo.txt' working, how will you handle '-file "C:\Program Files (x86)\bar\foo bar.txt"'?

If you get `-message hello` working, will your code handle `-message "hello, "friend""`?

Having this prebuilt is useful.


Neither of your examples requires anything from the program regardless of what language it's written in, let alone what arg-parsing library it uses. Quotes are evaluated by the shell; your program gets an argv.

(Yes, Windows is complicated by having two different built-in parsers for argv [CRT and shellapi] because the kernel itself doesn't have a concept of argv. But if we're talking about .Net programs then the runtime makes that choice and gives your entrypoint an argv, so again quote-evaluation is not in the program's purview.)


My example may not be the best, but .NET command-line handling libraries contain significant quote-handling code. I know because I have relied on it.

I'm less familiar with the library that OP is using, but it seems to be here: https://github.com/spectresystems/spectre.console/blob/main/...

Going in the other direction, generating command lines: https://github.com/natemcmaster/CommandLineUtils/blob/main/s...

I don't recommend re-inventing this


That is because he's giving a tiny example of a starter template for a console app.

If you were writing a "production-level" console app with multiple commands, logging, DI, input validation, nice-looking progress bars, etc., then this is pretty much what I'd want for a console template, tbh.

Spectre.Console looks insanely nice.
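For anyone who hasn't looked at it yet, the command/settings shape is roughly the sketch below (written from memory of the docs; GreetCommand and its options are made up, so check the project's samples for the real API):

    using Spectre.Console;
    using Spectre.Console.Cli;

    public class GreetCommand : Command<GreetCommand.Settings>
    {
        public class Settings : CommandSettings
        {
            [CommandArgument(0, "<NAME>")]
            public string Name { get; set; }

            [CommandOption("-s|--shout")]
            public bool Shout { get; set; }
        }

        public override int Execute(CommandContext context, Settings settings)
        {
            // Markup gives you the colour output; validation and help
            // come from the settings metadata.
            AnsiConsole.MarkupLine(settings.Shout
                ? $"[red]HELLO {settings.Name.ToUpper()}![/]"
                : $"Hello [green]{settings.Name}[/]");
            return 0;
        }
    }

    public static class Program
    {
        public static int Main(string[] args) => new CommandApp<GreetCommand>().Run(args);
    }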


I guess they had fun doing it! Some libraries are written for fun, to improve one's command of the craft and self-confidence. Usually they are the beginnings of a new, delightful product. So I feel it is all for the good.


As a non-.Net developer, my impression of anything .Net is always a hefty amount of boilerplate.


> public thing Thing = new Thing();

can we please please please stop making programming languages like this


In C# 9 you can write it like this:

    public Thing _thing = new();


You can do the following to shorten that depending on context:

  var thing = new Thing(); 
  Thing thing = new();


C# is almost 20 years old. What would you change this statement to?


For once, C++ is more terse: public: Thing thing;

:) Really though this is just the old way to write C#. Even Java is starting to update this pattern.


It's rather old school style. C# has had local type inference for ages.


Jeezus, that's a lot of code to get a decent console. On Mac I just install iTerm. Does everything have to be long-winded and verbose with C#?


This is code to write a command line application, not to use the console.



