pahn's comments | Hacker News

I made an art installation on this question once: https://bildsignal.de/p_derweil

"derweil is an interactive video installation correlating time, space and big data to provide tailor-made instructions on how to get lost. Materials: Google Directions and Streetview APIs, JavaScript, NW.js, cables.gl, Involt, Arduino IDE, computers, thermal paper, plastic, metals, wood."

(I doubt I could still run this today, though. I used some kind of 'hack' to bypass Google Streetview API limitations and I'm pretty sure they fixed this ages ago… ;D)


It's money. If you look at per-capita investment in the railway system, Italy actually spends more on its railways than Germany: https://www.allianz-pro-schiene.de/presse/pressemitteilungen...


mpv is great! It's also the only player I found that can be controlled externally, e.g. through a hardware jog/shuttle or an Arduino. This might be outdated (it's from 2018), but for reference (that's me talking to myself ;): https://softwarerecs.stackexchange.com/questions/53208/video...
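
Roughly how that works: mpv exposes a JSON IPC interface via --input-ipc-server, so anything that can write JSON lines to a socket (say, a script reading an Arduino over serial) can drive it. A minimal Python sketch, assuming the socket path below:

    # Start mpv first with:  mpv --input-ipc-server=/tmp/mpvsocket somefile.mp4
    import json
    import socket

    def mpv_command(*args):
        """Send one command to mpv's IPC socket; return the raw reply line(s)."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect("/tmp/mpvsocket")
            s.sendall(json.dumps({"command": list(args)}).encode() + b"\n")
            return s.recv(4096).decode()

    mpv_command("seek", 2, "relative")         # jog: nudge 2 seconds forward
    mpv_command("set_property", "speed", 2.0)  # shuttle: play at double speed
    mpv_command("cycle", "pause")              # toggle pause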


Cables is absolutely fantastic. I used it personally for an art project, and I was also involved with a commercial AR experience that used cables to run elaborate, fully interactive 3D scenes in a normal browser, on mobile. As with other node-based languages (e.g. vvvv, Max/MSP), you edit your code while it's running, so you see directly what you're doing without constantly switching interfaces. And in the end it generates a JS file you can just embed in an iframe. Honestly, no idea why this isn't more widely used. Huge fan!


Um… a friend of mine actually made an artwork on this. When life imitates art: https://alexanderpeterhaensel.com/smiletovote


…well, if it runs. Because all these predictive-auto-whatever features also break things: e.g., I have a bug in Apple Mail [1] which basically breaks "entering text into a computer using a keyboard", a problem I would have thought was solved some 70+ years ago, but alas, here we are…

[1] https://discussions.apple.com/thread/255409297


I read almost all of this, and am kind of torn on this resource: On one hand, it is a great introduction to ComfyUI, including concepts and possibilities, and it really IS helpful for getting started. Thanks to the authors for putting in the work!

That said, some of the information, especially regarding the more technical parts, is somewhat misleading (debatable, since simplification is hard), and at times outright false (e.g. "This method is called LoRA (Loopy Recurrent Attention)"… what? LoRA actually stands for Low-Rank Adaptation), so don't make it your only source of info. E.g., I also found this helpful: https://blenderneko.github.io/ComfyUI-docs/


I had an art exhibition last week where I tried to replace myself as an artist with a Henry Ford-style production line (including a moving conveyor belt), run by different AI models producing my art. The system looks for inspiration in my emails and my bookshelf, surfs Reddit and HN, and then creates pretty fancy installation concepts plus the matching installation views. The 125 artworks it produced within a day and a half can all be checked out here: https://maschinenzeitmaschine.de/p_doppelgaenger_gallery/

Under the hood, it's Python (Flask), mostly using GPT-3 and GPT-4 plus Midjourney (with fallbacks to DALL-E and local Stable Diffusion, as driving Midjourney programmatically is pretty fragile). I tried fine-tuning models, but since I did not have enough data, I got better results by just chaining complex, modular prompts. The conveyor belt was made out of spare treadmill parts… feel free to ask any questions!
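
To give an idea of the fallback logic, a simplified Python sketch (the render_* helpers are placeholders standing in for the real backend calls, not the actual code):

    def render_midjourney(prompt):
        raise NotImplementedError  # fragile, unofficial Midjourney automation

    def render_dalle(prompt):
        raise NotImplementedError  # OpenAI image generation API

    def render_local_sd(prompt):
        raise NotImplementedError  # local Stable Diffusion, last resort

    def generate_image(prompt):
        # Try the preferred backend first, then fall through in order.
        for backend in (render_midjourney, render_dalle, render_local_sd):
            try:
                return backend(prompt)
            except Exception:
                continue
        raise RuntimeError("all image backends failed")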


The Midjourney outputs look fantastic! Are those the prompts used in the gallery? Was post-processing involved? Usually there's a very distinctive look to Midjourney images, but these look very believable.


Thank you! I did not use the concepts directly as prompts, but asked the concept model to also output a visual description (which was only used internally). This description I then piped through GPT-4 (which was much better at this than GPT-3), added some modifiers, and only then sent it to Midjourney. You will often notice, though, that language comprehension is not really on point: it often shows the right elements in the image, but not necessarily in the same relationship to one another as described in the concept...
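
The shape of that middle step, as a sketch (using the pre-1.0 openai package, which was current at the time; the prompt wording here is illustrative, the real prompts were much more elaborate):

    import openai  # pre-1.0 API style

    def description_to_image_prompt(visual_description):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content":
                 "Rewrite this description of an art installation as one "
                 "concise, concrete image-generation prompt."},
                {"role": "user", "content": visual_description},
            ],
        )
        return resp["choices"][0]["message"]["content"]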


For the second part of your question regarding style and post-processing: In terms of style, what really helped was to use "--style raw" and to give it very specific locations like "…in a contemporary art museum". I had a whole bunch of these, which were randomly added to the prompt. In terms of post-processing, I upscaled the images locally with Stable Diffusion (using imaginAIry) and then added some grain, but that's it.
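
The prompt assembly was something like this (a sketch; the location list here is illustrative, the real one was longer):

    import random

    LOCATIONS = [
        "in a contemporary art museum",
        "in a white cube gallery",
        "in an industrial exhibition hall",
    ]

    def build_prompt(description):
        return f"{description}, {random.choice(LOCATIONS)} --style raw"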


Their 404 error page is the best-sounding one I have seen so far: https://learningmusic.ableton.com/wronglink


Since it took me quite a while to find this myself: a simple, good, and open-source solution for 2FA on iOS is Tofu. App Store: https://apps.apple.com/us/app/tofu-authenticator/id108222930... GitHub: https://github.com/iKenndac/Tofu (not affiliated, just a happy user)

