tercmd's comments | Hacker News

The state can be re-imported like this:

    a = (insert JSON output here)
    window.$nuxt.$root.$children[2].$children[0].$children[0]._data.elements = a.elements;
    window.$nuxt.$root.$children[2].$children[0].$children[0]._data.discoveries = a.discoveries;


I made a bookmarklet that loads the state from localStorage and also auto-saves the state on each new craft:

    javascript:(function(){
        const exportState = () => JSON.stringify({
            discoveries: window.$nuxt.$root.$children[2].$children[0].$children[0]._data.discoveries, 
            elements: window.$nuxt.$root.$children[2].$children[0].$children[0]._data.elements
        });

        const importState = (state) => {
            const { discoveries, elements } = JSON.parse(state);
            const gameInstance = window.$nuxt.$root.$children[2].$children[0].$children[0]._data;
            gameInstance.discoveries = discoveries;
            gameInstance.elements = elements;
        };

        /* Set up a MutationObserver to listen for changes in the DOM and automatically export the current state. */
        const observer = new MutationObserver((mutations) => {
            const state = exportState();
            localStorage.setItem('gameState', state);
        });

        /* Start observing DOM changes to auto-save the game state. */
        const startObserving = () => {
            const targetNode = document.querySelector('.sidebar');
            observer.observe(targetNode, { childList: true, subtree: true });
        };

        /* Check for a saved state in localStorage and import it if available. */
        const savedState = localStorage.getItem('gameState');
        if (savedState) importState(savedState);
        else localStorage.setItem('gameState', exportState() );

        startObserving();
    })();


I used this to import my own terms.

This can be used to get a novel starting point (disregarding the original starting point).

It can also be used to start from unreachable elements, although it isn't clear to me exactly how the "First discovery" mechanic works: will your unreachable elements pollute the neal.fun datastore, or only their byproducts? Either way, it's interesting.


By the way, you missed a semicolon after `(insert JSON output here)`.


The k-anonymity API means the password itself never has to be sent to HIBP, only the first 5 characters of its SHA-1 hash.

The API returns a list of matching hash suffixes with breach counts, and the suffix of the actual password's hash can be checked against that list to see how many times it has been breached.

For example, a search for "abc" with the hash "a9993e3...89d" becomes:

`curl -s https://api.pwnedpasswords.com/range/A9993 | grep -i e364706816aba3e25717850c26c9cd0d89d`

which returns `E364706816ABA3E25717850C26C9CD0D89D:226273`, indicating that the password has been seen 226,273 times.
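
If you want to script the same check, here's a rough sketch in Node (18+, for the built-in fetch); the password is hashed locally and only the 5-character prefix ever leaves your machine:

    const crypto = require('crypto');

    /* Sketch of a k-anonymity lookup against the Pwned Passwords range API. */
    async function pwnedCount(password) {
        const hash = crypto.createHash('sha1').update(password).digest('hex').toUpperCase();
        const prefix = hash.slice(0, 5); /* only this part is sent to HIBP */
        const suffix = hash.slice(5);    /* compared locally against the response */

        const res = await fetch('https://api.pwnedpasswords.com/range/' + prefix);
        const body = await res.text();

        /* Each response line is "SUFFIX:COUNT"; find the one matching our suffix. */
        for (const line of body.split('\n')) {
            const [candidate, count] = line.trim().split(':');
            if (candidate === suffix) return parseInt(count, 10);
        }
        return 0;
    }

    pwnedCount('abc').then(n => console.log('seen ' + n + ' times'));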


The Pwned Passwords lookup returns just the number of breaches without any hashes or plaintext passwords. This would not make such an attack possible.


That’s good to know. So they’re probably using old dumps.


(Creator of the repo here) The repo now has some client-side JS (unminified). https://github.com/terminalcommandnewsletter/everything-chat...


If you block the `/backend-api/moderations` endpoint, that (as expected) doesn't stop the AI from refusing to give you an answer.
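
For anyone who wants to reproduce this, one way to block the endpoint is a small fetch override pasted into the DevTools console (just a sketch; an adblock rule or DevTools request blocking would do the same):

    /* Sketch: stub out requests to /backend-api/moderations so they never leave the browser. */
    const realFetch = window.fetch;
    window.fetch = (input, init) => {
        const url = input instanceof Request ? input.url : String(input);
        if (url.includes('/backend-api/moderations')) {
            /* Answer with an empty OK response instead of calling the endpoint. */
            return Promise.resolve(new Response('{}', { status: 200 }));
        }
        return realFetch(input, init);
    };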


(Creator of the repo here) Likely ONLY due to HN, the stats for the repo have BLOWN UP in the last day. It went from ~200 unique visitors and visits in one day to 8,689 unique visitors with 10,000+ total visits!


I've not tested it further, but I wondered what would happen when I saw a "Plus subscriber login link" input field, and that's what resulted. In fact, I wonder if one could automate the request to the API endpoint (all headers, authorization and everything) whenever the browser sees a "ChatGPT down" message. (Of course, you have to click the link in the email, but if you grant the tool access to your email to automate that, it probably isn't the best thing for the principle of least privilege.)


In some of the ChatGPT client-side JS (visible in the Debugger tab of DevTools), I could see references like `\triangle`. However, asking ChatGPT to print it as-is just prints the text itself.


This prompt works best with GPT-4; GPT-3.5 gives inconsistent results. There might be a way of improving the prompt for 3.5.


Turns out, this actually does work.


(Creator of the repo here) That is somewhat present in the `/backend-api/moderations` endpoint, which the app sends your question and ChatGPT's response to (the first time just your question, the second time both).
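
You can see this for yourself in the Network tab, or log the request bodies with a small fetch wrapper in the console (a sketch that assumes the app passes the body via the init argument; it just dumps whatever is sent to that path):

    /* Sketch: log whatever the page sends to /backend-api/moderations. */
    const origFetch = window.fetch;
    window.fetch = (input, init) => {
        const url = input instanceof Request ? input.url : String(input);
        if (url.includes('/backend-api/moderations') && init && init.body) {
            console.log('moderations payload:', init.body);
        }
        return origFetch(input, init);
    };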

