
No, the correct way is to have it reason from first principles:

1. "Think about what are the underlying principles for evaluating the truthiness of statements like 'X' - list them out, explain why you chose each one, what tradeoffs you made, why you believe it's the right tradeoff in this case"

2. Start a new conversation and make the system prompt be that set of principles

3. In the user prompt, ask it to decompose X into a weighted formula for those principles and give a sub-score for each principle.

4. Finally, based on the weighted sum, ask it to determine if X is true or not true, and ask it to provide a confidence score between 0 and 1 for its response


I have a number of "slash commands" listed amidst my custom instructions, which I usually use naked to amend the previous reply, but sometimes include in my query to begin with.

"There are some slash commands to alter or reinforce behavior.

/b Be as brief as possible

/d Be as detailed as possible

/eli5 or just /5 Explain like I'm 5. And you're Richard Feynman.

/r Research or fact-check your previous reply. Identify the main claims, do a web search, cite sources.

/c Complement your prior reply by writing and running code.

/i Illustrate it."


You can use network namespaces too. As a reference, here is my torrent setup:

  ip netns add torrent
  ip link add wg1 type wireguard
  ip link set wg1 netns torrent
  ip -n torrent addr add 10.67.124.111/32 dev wg1
  ip netns exec torrent wg setconf wg1 /etc/wireguard/wg1.conf
  ip -n torrent link set wg1 up
  ip -n torrent route add default dev wg1
  ip netns exec torrent ip link set dev lo up

  ip netns exec torrent transmission-daemon -f 2>&1
AFAIK it's pretty bulletproof. But for good measure I also have transmission configured to only listen on the wireguard address.

The whole point of a TPM is to reduce the possibility of a backdoor.

And you do own the private key in a TPM; what makes you think you do not? Do you "own" the private key on a U2F device or a YubiKey or a tensor?

The special sauce of a TPM is the tracking of various parts of the hardware.

No matter what, your boot loader is unencrypted. It MUST be read by the hardware to decrypt your actual OS. Without a TPM, an attacker can modify the bootloader, the BIOS, or many other things that would give them access to your key.

With a TPM, if anything is modified, it will refuse to deliver the key. This is a level of security currently impossible without it.


How is being required to fulfill their half of the contract complicated?

One of the best levers many of us have is public forums, so this commenter is doing something great: reinforcing, in thousands of readers' minds, the idea that the government should be changed. Once an idea gets enough traction, it often can turn into actual change. You are only reinforcing the fatalism you seem to criticize.

I don't agree. PAIP goes well beyond "introduces as much CL as necessary to carry out the task" in its Common Lisp presentation. It could easily serve as a first book for someone to learn Common Lisp, even if that person has no interest in AI whatsoever.

Some of the examples in PAIP may be outdated in a historical but not practical sense. I've joked in the past that the code is so well-written that it can be excised and framed on a wall. With code this good, the lessons one can learn are numerous and the code stands the test of time.

Unfortunately, the same cannot be said about the PCL examples since - to name two - there are far better ways to do binary parsing and pathname handling today than binary-types and cl-fad. The PCL examples are a product of the time they were written, but they are tied to a particularly verbose, subjective style and were not "optimal", or even particularly good, back then.

Ultimately, PCL introduces Common Lisp to the audience, as does PAIP. The difference is that reading PCL in 2020 will have minimal thought-provoking impact, let's say. Reading PAIP (or Let over Lambda, SICP, On Lisp, Lisp in Small Pieces) remains - and will remain - a paradigm-shifting experience.


I heard this joke when I was young and never really got it until I was older. ---------------

A TV reporter became lost on the back roads and stopped at a farm to get directions. As he was talking to the farmer he noticed a pig with a wooden leg. “This could be a great story for the Six O’Clock News. How did that pig lose his leg?” he asked the farmer. “Well”, said the farmer, “that’s a very special pig. One night not too long ago we had a fire start in the barn, and that pig squealed so loud and long that he woke everyone, and by the time we got there he had herded all the other animals out of the barn. Saved them all.”

“And that was when he hurt his leg?” asked the journalist anxious for a story. “Nope, he pulled through that just fine.” said the farmer. “Though a while later, I was back in the woods when a bear attacked me. Well, sir, that pig was nearby and he came running and rammed that bear from behind and then chased him off. He saved me for sure.”

“Wow! So the bear injured his leg then?” questioned the reporter. “No. He came away without a scratch. Though a few days later, my tractor turned over in a ditch and I was knocked unconscious. Well, that pig dove into the ditch and pulled me out before I got cut up in the machinery.” “Ahh! So his leg got caught under the tractor?” asked the journalist. “Noooo. We both walked away from that one,” said the farmer.

“So how did he get the wooden leg?” asked the journalist. “Well”, the farmer replied, “A pig like that, you don't eat all at once!”


No, research microkernels are generally much faster than mainstream OS kernels when you measure the operations that are implemented by both, because the crucial question for microkernel viability is whether they can be made to be fast enough; that's been true for 30 years. So an immense amount of microkernel research has focused on that one thing. You can run lmbench on your Linux laptop and get some numbers you can compare with published L4 numbers on similar hardware.

The way that microkernels are sometimes slower is, as I said above, on operations that monolithic kernels implement internally, but microkernels leave to userspace processes. For example, ping-pong IPC is something you can measure on L4, and reliably get orders of magnitude better performance than on Linux. But creating a file isn't, because L4 itself doesn't have files—you can implement them in userspace, and that can be done with varying degrees of extra overhead. It might still come out about even with Linux (I think I've seen some results to that effect) but it probably won't be orders of magnitude faster.

The part of your comment I agree with is that measuring performance is complicated.


That's how I implemented tabbed windows in 1988 for the NeWS window system and UniPress Emacs (aka Gosling Emacs aka Evil Software Hoarder Emacs), which supported multiple windows on NeWS and X11 long before Gnu Emacs did.

That seemed to me like the most obvious way to do it at the time, since tabs along the top or bottom edges were extremely wasteful of screen space. (That was on a big hires Sun workstation screen, not a tiny little lores Mac display, so you could open a lot more windows, especially with Emacs.)

It makes them more like a vertical linear menu of opened windows, so you can read all their titles, fit many of them on the screen at once, and instantly access any one, and you can pop up pie menus on the tabs to perform window management commands even if the windows themselves are not visible.

https://en.wikipedia.org/wiki/Tab_(GUI)#/media/File:HyperTIE...

https://en.wikipedia.org/wiki/Tab_(GUI)

http://www.cs.umd.edu/hcil/trs/90-02/90-02-published.pdf

>Since storyboards are text files, they can be created and edited in any text editor as well as be manipulated by UNIX facilities (spelling checkers, sort, grep, etc...). On our SUN version Unipress Emacs provides a multiple windows, menus and programming environment to author a database. Graphics tools are launched from Emacs to create or edit the graphic components and target tools are available to mark the shape of each selectable graphic element. The authoring tool checks the links and verifies the syntax of the article markup. It also allows the author to preview the database by easily following links from Emacs buffer to buffer. Author and browser can also be run concurrently for final editing. [...]

>The more recent NeWS version of Hyperties on the SUN workstation uses two large windows that partition the screen vertically. Each window can have links and users can decide whether to put the destination article on top of the current window or on the other window. The pie menus made it rapid and easy to permit such a selection. When users click on a selectable target a pie menu appears (Figure 1) and allows users to specify in which window the destination article should be displayed (practically users merely click then move the mouse in the direction of the desired window). This strategy is easy to explain to visitors and satisfying to use. An early pilot test with four subjects was run, but the appeal of this strategy is very strong and we have not conducted more rigorous usability tests.

Using pie menus gesturally with "mouse-ahead display pre-emption" was a lot like "swiping" on an iPad. And window managers tend to have directional commands: open on left or right side, resize from bottom right corner, move to top or bottom layer, etc, which correspond nicely to pie menu directions, so they're obvious, easy to learn, remember, and use without looking or waiting.

>In the author tool, we employ a more elaborate window strategy to manage the 15-25 articles that an author may want to keep close at hand. We assume that authors on the SUN/Hyperties will be quite knowledgeable in UNIX and Emacs and therefore would be eager to have a richer set of window management features, even if the perceptual, cognitive, and motor load were greater. Tab windows have their title bars extending to the right of the window, instead of at the top. When even 15 or 20 windows are open, the tabs may still all be visible for reference or selection, even though the contents of the windows are obscured. This is a convenient strategy for many authoring tasks, and it may be effective in other applications as well.

I regularly used Emacs with tab windows and pie menus for software development and hypermedia authoring, and found them useful enough that I thought all NeWS applications should have them. So I implemented them as a globally shared extension to the NeWS window manager, independent of Emacs, so all NeWS applications got tabbed windows with pie menus. (NeWS was a lot like Smalltalk in that you could dynamically customize and extend the entire system like that.)

My later versions of tabbed windows with pie menus for NeWS in 1990 let you drag the tab around to any position along any edge you wanted. And they had pie menus designed to make window management quick and easy and very gestural, supporting "mouse ahead display pre-emption" and previewing and highlighting in the overlay plane (which was much faster to draw interactively than moving and resizing the live windows themselves).

https://www.youtube.com/watch?v=tMcmQk-q0k4

I iterated on the idea of tabbed windows and made several different versions over the lifetime of NeWS for its various GUI toolkits (Lite, NDE, TNT):

https://donhopkins.com/home/archive/emacs/emacs.ps

https://donhopkins.com/home/archive/NeWS/tabwin.ps

https://donhopkins.com/home/archive/NeWS/old-xnews-tab.ps

https://donhopkins.com/home/archive/NeWS/tabframe-1.ps

https://donhopkins.com/home/archive/NeWS/tab-3.0.2.ps

    % Pie menus and tab windows are NOT patented or restricted, and the
    % interface and algorithms may be freely copied and improved upon.
At Sun, we even made an X11 ICCCM window manager that wrapped tabbed windows with pie menus around nasty old rectangular X-Windows:

https://news.ycombinator.com/item?id=15327339

https://donhopkins.com/home/archive/NeWS/owm.ps.txt

Tabs aren't only for top level windows or frames. You can also attach tabs to any side of nested windows, so you can drag them around anywhere freely, then stick them on a "stack" to constrain their movement, so you can drag them up and down to rearrange their order on the stack, and pop them off the stack by dragging them far enough away. The tabs can have pie menus with a standard set of window management commands, as well as a submenu item for content-specific commands, so they're easy to learn (because of their common layout) and also possible to customize (because of their type specific submenu).

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

>There is a text window onto a NeWS process, a PostScript interpreter with which you can interact (as with an "executive"). PostScript is a stack based language, so the window has a spike sticking up out of it, representing the process's operand stack. Objects on the process's stack are displayed in windows with their tabs pinned on the spike. (See figure 1) You can feed PostScript expressions to the interpreter by typing them with the keyboard, or pointing and clicking at them with the mouse, and the stack display will be dynamically updated to show the results.

>Objects on the PSIBER Space Deck appear in overlapping windows, with labeled tabs sticking out of them. Each object has a label, denoting its type and value, i.e. "integer 42". Each window tab shows the type of the object directly contained in the window. Objects nested within other objects have their type displayed to the left of their value. The labels of executable objects are displayed in italics. [...]

>Tab Windows: The objects on the deck are displayed in windows with labeled tabs sticking out of them, showing the data type of the object. You can move an object around by grabbing its tab with the mouse and dragging it. You can perform direct stack manipulation, pushing it onto stack by dragging its tab onto the spike, and changing its place on the stack by dragging it up and down the spike. It implements a mutant form of “Snap-dragging”, that constrains non-vertical movement when an object is snapped onto the stack, but allows you to pop it off by pulling it far enough away or lifting it off the top. [Bier, Snap-dragging] The menu that pops up over the tab lets you do things to the whole window, like changing view characteristics, moving the tab around, repainting or recomputing the layout, and printing the view.

Here's some more stuff I wrote about tabbed windows in HN:

https://news.ycombinator.com/item?id=16876520

>Unfortunately most of today's "cargo cult" imitative user interface designs have all "standardized" on the idea that menu bars all belong at the top of the screen and nowhere else, menu items should lay out vertically downward and in no other direction, tabs should be rigidly attached to the top edge and no other edge, and the user can't move them around. But there's no reason it has to be that way.

Now Firefox and Chrome still need decent built-in, universally supported, user-customizable pie menus, but unfortunately the popup window extension API is inadequate to support them, because there's no way to make them pop up centered on the cursor, or to control the popup window's shape and transparency. Unfortunately they were only thinking of drop-down linear menus when they designed the API. (Stop thinking inside that damn rectangular box, people!)

But I remain hopeful that somebody will eventually rediscover pie menus in combination with tabbed windows for the web browser, and implement them properly (not constrained to pop up inside the browser window and be clipped by the window frame, and not just one gimmicky hard-coded menu that users can't change and developers can't use in their own applications). But the poorly designed browser extension APIs still have a hell of a lot of catching up to do with what was trivial to do in NeWS for all windows 30 years ago.

https://news.ycombinator.com/item?id=8041232

>These things used to rub me the wrong way in the 90's, but I've learned to take it in stride. I think it's great that people are rediscovering and trying out old ideas in new systems, and perhaps the understandable belief that something's never been done before isn't so bad, if it motivates them to keep trying out different ideas in new contexts, and leads to even more great stuff that's never been done before, or even just re-implementations of old ideas that aren't as ugly as they used to be the first time around.

>So many "modern" user interfaces are such cargo cult carbon copies of each other (like tabs along just the top edge, or that way "flat" is such a big fad these days), that it's easy to get the impression that anything slightly different is actually original.

>Back in the day, we had no choice but to draw "flat" user interfaces, because all we had was black and white, and moving the cursor across the screen was uphill both ways!

Here's an old todo list from 1987 with some other crazy ideas I (fortunately or not) never got around to implementing in NeWS. The "stack of cards with indexing tabs" would be a generic widget that let you easily flip through and manage an editable stack of things using tabs with pie menus:

https://donhopkins.com/home/archive/NeWS/news.todo.txt

What I was getting at was an extensible generalization of tabs, kind of like Scratch's tools that appear around the selected object: customizable widgets that stick out from the edges of rectangular windows, that could be various shapes (like ears or antennae) and do various things (screen snapshot, scrolling, navigation, window management, application commands, etc).

A window could have one main window management title tab (and possibly others) that were always visible, and when it was activated, other auxiliary tabs of various types could open up in the last place you left them. Some could represent iconified sub-windows, like tool palettes or nested sub-windows that you could open up recursively, like an outliner.

You could attach many different kinds of tabs to the edge of a window, move them around, hide and reveal them, and open their property sheets and help screens, etc. And you could plug in new kinds like browser extensions, or interactively script your own and copy and paste them around like a HyperCard window manager. That's how the web browser and window manager should work together seamlessly.


After 15 years, I still struggle to test if a string is empty in shell languages. Do I use square brackets, double square brackets, equal sign, double equal sign, test -n, set -q, wtf mate? Do I need to wrap my string variable in double quotes? single quotes?

Fish looks great, and I'm going to give it a try, but I ran into these same old shell scripting issues within 2 minutes of trying to configure my prompt, which is a bummer. Took me 10 minutes (I'm ashamed to say) to end up at `if test -n "$git_branch"` (double quotes are critical).

I'll be excited when someone invents a shell that feels natural in this respect. Perhaps the nature of shells and commands makes this impossible? Or are we just stuck in a box?
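For reference, here are the idioms that finally stuck for me, as a sketch (POSIX sh unless noted; `s` is just a placeholder variable):

```shell
#!/bin/sh
s=""

# POSIX: `test -z` (or equivalently `[ -z ... ]`) is true when the
# string is empty. Always double-quote the expansion so an empty or
# unset value still parses as one (empty) argument instead of
# disappearing and breaking the syntax.
if [ -z "$s" ]; then
  echo "empty"
fi

# `test -n` is the complement: true when the string is non-empty.
s="hello"
if [ -n "$s" ]; then
  echo "non-empty"
fi

# bash/zsh only: [[ ]] is shell syntax, not a command, so it doesn't
# word-split and the quotes become optional -- but quoting anyway is
# the habit that saves you when you drop back to plain sh:
#   if [[ -z $s ]]; then ...; fi
```

Single quotes would prevent the `$s` expansion entirely, which is why it's the double quotes that are critical.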


There are other ways to do subshells than using the same syntax, just that the Fish developers never did it, maybe because it's not requested enough. I think it's still the most annoying thing about Fish, and the most convincing reason to switch away again. I frequently drop into bash to do

    (cd subdirectory && command --that might.fail) && run --command-in-original-dir --only-if-subshell-succeeded
... and expect to still be in the original dir when it's done. Any solution in Fish using "&& cd -" at the end of a block AFTER a maybe-failing command is just wrong, and there seriously isn't any way except saving and restoring $PWD every time you want subshell functionality, or using "; cd -" and manually saving the $status, which is equally frustrating.

There are some non-syntactic suggestions in https://github.com/fish-shell/fish-shell/issues/1439


I think this article makes a simple concept seem unnecessarily convoluted. The first paragraph says:

> This very complex object coming from the Category Theory is so important in functional programming that is very hard to program without it in this kind of paradigm.

May I try to dispel this myth that monads (and functional programming ideas in general) are complex?

New functional programmers are often faced with a dilemma: how do I do side effects (e.g., mutable state) in a purely functional way? Isn't that a contradiction? In 1989, Eugenio Moggi gave us a compelling answer to these questions: monads. The idea of monads originally comes from category theory, but category theory is not at all necessary to understand them as they apply to programming.

We start with this: since a pure functional programming language doesn't have step-by-step procedures built into the language, we instead have to choose our own suitable representation for impure programs and define a notion for composing them together. For example, a stateful program (a program that has read/write access to some piece of mutable state) can be represented by a function which takes the initial state and returns a value and the new state (as a pair).

So, for example, if the mutable state is some integer value, then stateful programs that return a value of type `a` will have this type:

  Int -> (a, Int)
Let's give that type constructor a convenient name:

  StatefulProgram(a) = Int -> (a, Int)
One key piece of the story is that this type constructor is a functor, which is just a fancy way to say that we can `map` over it (like we can for lists):

  map(f, program) = function(initialState) {
    (result, newState) = program(initialState)
    return (f(result), newState)
  }
Then, if we have a stateful program which produces a string (i.e., its type is `StatefulProgram(String)`) and a function `stringLength` which takes a string and returns its length, we could easily convert the program into a new program that returns the length of the string instead of the string itself:

  newProgram = map(stringLength, program)
Another way to phrase this is: we can compose a stateful program with a pure function to get a new stateful program. But that's not quite enough to write useful programs. We need to be able to compose two stateful programs together. There are a few equivalent ways to define this. In Haskell, we would have a function (written `>>=` and pronounced "bind") that takes a stateful program and a callback. The callback gets the result of the stateful program and returns a new stateful program to run next.

  bind(program, callback) = function(initialState) {
    (result, newState) = program(initialState)
    newProgram = callback(result)
    return newProgram(newState)
  }
Some languages call this function `flatMap` instead of `bind`. In JavaScript, this is like the `then` function for promises. Whatever we call it, we can easily use it to write a helper function which sequences two stateful programs:

  sequence(program1, program2) = bind(program1, function(result) {
    return program2
  })
This amounts to running the first program, throwing away its result (see that the `result` variable is never used), and then running the second program.

One more ingredient is needed to make this `StatefulProgram` idea really useful. We need a way to construct a stateful program that just produces a value without touching the state. We'll call this function `pure`:

  pure(x) = function(initialState) {
    return (x, initialState)
  }
Here's what makes `StatefulProgram` a monad:

a) First of all, it needs to be a functor. That amounts to having a `map` function like we defined above.

b) We need a way to construct a `StatefulProgram(a)` given an `a`. That's our `pure` function.

c) We need some notion of composition. That's given by our `bind` function. (And note that `sequence` is just a special case of `bind` where the callback doesn't use its argument.)

Category theory also gives us some common sense laws that monads must satisfy. For example, `bind(pure(x), callback) = callback(x)`.

The brilliant insight of Eugenio Moggi is that these three functions are essentially an interface for any kind of side effect. Mutable state is just one example. We could represent other kinds of side-effectful programs in the same way. For example, a program which returns multiple times could be represented as a list. Then the `map` and `flatMap`/`bind` functions are exactly what you expect, and the `pure` function just constructs a list with a single element. Other examples of monads are IO (for interacting with the operating system), continuations (for doing fancy control flow), maybe/optional (for programs that may return a `null` value), exceptions, logging, reading from an environment (e.g., for threading environment variables through your program), etc. They all have the same interface, which is represented in Haskell as the `Monad` type class (type classes are Haskell's notion of interfaces).
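To make the list example concrete, here is a minimal Python sketch; the names `pure` and `bind` mirror the text, and this is illustrative, not any particular library's API:

```python
def pure(x):
    # A "program" that returns exactly one value.
    return [x]

def bind(xs, callback):
    # Run the multi-result program, feed each result to the callback
    # (which returns another multi-result program), and flatten the
    # resulting lists into one list -- hence the name flatMap.
    return [y for x in xs for y in callback(x)]

# Composing two multi-result programs gives all combinations:
pairs = bind([1, 2], lambda x:
        bind(["a", "b"], lambda y:
        pure((x, y))))
# pairs == [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```

Note that the left identity law from the text holds here: `bind(pure(x), callback)` reduces to `callback(x)`.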

Haskell also provides a convenient syntax called `do notation` for working with monads (this is a vast generalization of the async/await syntax that is creeping into some popular languages). For example, a stateful program that reads the state and mutates it could be written like this:

  program = do
    x <- get     -- Read the state
    put 3        -- Update the state
    pure (x + 3) -- Return what the state used to be plus 3
In our syntax, that would be equivalent to writing:

  program = bind(get, function(x) {
    return bind(put(3), function(result) {
      return pure(x + 3)
    })
  })
That callback hell is quite an eyesore, and I think that's one of several reasons why monads are not very popular outside of the Haskell community.
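For the curious, the whole `StatefulProgram` construction above transcribes into runnable Python almost verbatim. Here's a sketch, with `get` and `put` defined to match the Haskell example:

```python
# A stateful program is a function: initialState -> (result, newState).

def pure(x):
    # Produce a value without touching the state.
    return lambda state: (x, state)

def bind(program, callback):
    # Run the first program, feed its result to the callback,
    # and run the program the callback returns on the new state.
    def composed(state):
        result, new_state = program(state)
        return callback(result)(new_state)
    return composed

# Primitive programs for reading and writing the state.
get = lambda state: (state, state)
def put(new_state):
    return lambda state: (None, new_state)

# The do-notation example: read the state, set it to 3,
# return what the state used to be plus 3.
program = bind(get, lambda x:
          bind(put(3), lambda _:
          pure(x + 3)))

result, final_state = program(10)
# result == 13, final_state == 3
```

Running `program` on an initial state of 10 returns 13 (the old state plus 3) and leaves the state at 3, exactly as the do-notation comments describe.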

I migrated all of my services to k8s in the last ~6 months. The biggest hurdle was the development environment (testing and deployment pipelines). I ended up with a home-brewed strategy which happens to work really well.

# Local development / testing

I use "minikube" for developing and testing each service locally. I use a micro service architecture in which each service can be tested in isolation. If all tests pass, I create a Helm Chart with a new version for the service and push it to a private Helm Repo. This allows for fast dev/test cycles.

These are the tasks that I run from a "build" script:

* install: Install the service in your minikube cluster

* delete: Delete the service from your minikube cluster

* build: Build all artifacts, docker images, helm charts, etc.

* test: Restart pods in minikube cluster and run tests.

* deploy: Push Helm Chart / Docker images to private registry.

This fits in a 200 LOC Python script. The script relies on a library though, which contains most of the code that does the heavy lifting. I use that lib for all micro-services which I deploy to k8s.

# Testing in a dev cluster

If local testing succeeds, I proceed testing the service in a dev cluster. The dev cluster is a (temporary) clone of the production cluster, running services with a domain-prefix (e.g. dev123.foo.com, dev-peter.foo.com). You can clone data from the production cluster via volume snapshots if you need. If you have multiple people in your org, each person could spawn their own dev clusters e.g. dev-peter.foo.com, dev-sarah.foo.com.

I install the new version of the micro-service in the dev-cluster via `helm install` and start testing.

These are the steps that need automation for cloning the prod cluster:

* Register nodes and spawn clean k8s cluster.

* Create prefixed subdomains and link them to the k8s master.

* Create new storage volumes or clone those from the production cluster or somewhere else.

* Update the domains and the volume IDs and run all cluster configs.

I haven't automated all of these steps, since I don't need to spawn new dev clusters too often. It takes about 20 minutes to clone an entire cluster, including 10 minutes of waiting for the nodes to come up. I'm going to automate most of this soon.

# Deploy in prod cluster

If the above tests pass I run `helm upgrade` for the service in the production cluster.

This works really well.

