I can see this becoming a pretty generally accepted AI usage policy. Very balanced.
Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI generated code does not substitute human thinking, testing, and clean up/rewrite.
On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it is correct. Adding indirection where it doesn't make sense is a big issue I've noticed with LLMs.
> AI generated code does not substitute human thinking, testing, and clean up/rewrite.
Isn't that the end goal of these tools and companies producing them?
According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.
Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.
> Isn't that the end goal of these tools and companies producing them?
It seems to be the goal. But they seem very far away from achieving that goal.
One thing you should probably account for is that most of the proponents of these technologies are trying to sell you something. That doesn't mean there is no value in these tools, but the wild claims about their capabilities are just that: claims.
No, the code is generated by a tool that's "smarter than people in many ways". So which parts of "thinking, testing, and clean up/rewrite" can we trust it with?
Trust is a function of responsibility, not of smarts.
You may hire a genius developer that's better than you at everything, and you still won't trust them blindly with work you are responsible for. In fact, the smarter they are than you, the less trusting you can afford to be.
Very little, until it stops being stupid in many ways. We don't need smart, we need tools to not be stupid. An unreliable tool is more dangerous and more useless than having no tool.
This is such a good write-up and something I'm struggling with very hard. Does quality of code in the traditional sense even matter anymore if e.g. CC can work with said code anyway? I haven't had imposter syndrome in a long time, but it's spiking hard now. Whenever I read or write code I feel like an incompetent dev doing obsolete things.
Everything except the first provision is reasonable. IMO it's none of your damn business how I wrote the code, only that I understand it, and am responsible for it.
It's one of those provisions that seem reasonable but really have no justification. It's an attempt to allow something while extracting a cost. If I am responsible for my code, and am considered the author of the PR, then you as the recipient don't have a greater interest in knowing than my own personal preference not to disclose. There has never been any other requirement to disclose anything of this nature before. We don't require engineers to attest to the operating system or the licensing of the tools they use, so materially, outside your own prurient interests, how does it matter?
It's a signal-vs-noise filter, because today AI can make more mistakes. Your operating system or IDE cannot lead you to make a similar number of mistakes while writing code.
It is of course your responsibility, but the maintainer may also want to change their review approach when dealing with AI-generated code. And currently, as the AI Usage Policy also states, because of bad actors sending pull requests without reviewing them or taking responsibility themselves, this acts as a filter that separates out your PR, which you have taken responsibility for.
Maintenance, for one. I imagine contributions that are 100% AI generated are more likely to have a higher maintenance burden and lower follow-up participation from the author in case fixes are needed.
I think I’m going to use it as a guide for our own internal AI guideline. We hire a lot of contractors and the amount of just awful code we get is really taking a toll and slowing site buildouts.
What I understood from this is that LinkedIn and email outreach are quite effective for leads. 1-on-1 conversations and an obsessive focus on solving the problems different customers face do feel like the right way to go about sales.
We just launched Fostrom [1], an IoT Cloud Platform designed for developers. I was wondering what else others have found effective for sales and outreach in this space?
I'm building Fostrom (https://fostrom.io), an IoT Cloud Platform. We have Device SDKs to simplify integrating devices, powered by a small Device Agent written in Rust.
I wanted to support RISC-V boards too, so I went with the Milk-V Duo S as the test device. I have managed to get Tailscale working, and our Device SDK works too, with the bundled Python.
The experience of using the Milk-V Duo is definitely not as straightforward as with the Pi Zero, but it does work, and it is easily available in most places, unlike some of their other products. The Linux distro they provide is quite barebones, and I wasn't able to get Debian working. The docs for the device are pretty decent. I hope we get better support for Debian/Alpine/Arch for these kinds of boards soon.
In Fostrom, devices connect via our SDKs or standard protocols such as MQTT and HTTP, and send and receive structured, typed data, through pre-defined Packet Schemas. Each device gets its own sequential mailbox for messages. You can trigger webhooks or broadcast messages to other devices based on incoming data, powered by programmable actions (written in JS).
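For anyone unfamiliar with the plain-MQTT path, here's a minimal sketch of a device publishing structured telemetry with MQTT.js. The broker URL, credentials, topic names, and payload fields are hypothetical placeholders, not Fostrom's actual conventions.

```typescript
// Illustrative only: generic MQTT device telemetry with MQTT.js (npm: mqtt).
// Broker URL, credentials, topics, and payload fields below are hypothetical.
import mqtt from "mqtt";

const client = mqtt.connect("mqtts://broker.example.com:8883", {
  username: "device-123",      // placeholder device credentials
  password: "device-secret",
});

client.on("connect", () => {
  // Listen for messages addressed to this device (e.g. commands).
  client.subscribe("devices/device-123/inbox", { qos: 1 });

  // Publish a structured, typed datapoint; a platform-side schema would validate it.
  const reading = { temperature_c: 21.4, humidity_pct: 55, ts: Date.now() };
  client.publish("devices/device-123/telemetry", JSON.stringify(reading), { qos: 1 });
});

client.on("message", (topic, payload) => {
  console.log(`message on ${topic}:`, payload.toString());
});
```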
We entered Technical Preview recently. Since then, we've been working on:
- Major upgrades to Actions: making it easier to write action code, along with testing before deploying, and more docs on how to write good actions. Coming this week.
- We're in the process of releasing Device SDKs in multiple languages, with JS, Python, and Elixir coming soon. The SDKs are powered by an underlying lightweight Device Agent written in Rust.
- A new data explorer to view and analyze your fleet's datapoints, which will be available in a few weeks.
Happy to answer questions and appreciate any feedback.
Reading this post reminded me of another book I read a few years ago: Curious Moon [0].
It is written as a novel that teaches PostgreSQL by exploring the dataset of the Cassini orbiter around Enceladus, Saturn's moon. Highly recommended and a fun read.
I recommend trying Javy[0]. Javy allows you to build a WASM file that includes Javy's JS interpreter along with your JS source code. Note that Javy is a heavily sandboxed environment, so it doesn't have access to the internet or npm modules, which is a desirable property for running user code.
We're building an IoT Cloud Platform, Fostrom[1], where we're using Javy to power our Actions infrastructure. But instead of compiling each Action's JS code to its own Javy WASM module, I found a simpler way: we create a single WASM module with our wrapper code (which adds some further isolation and helper functions) and provide the user code as input when executing that single pre-compiled module.
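As a rough illustration of that pattern (not Fostrom's actual wrapper), here's a sketch that assumes Javy's `Javy.IO` stdin/stdout API and its TextEncoder/TextDecoder support, with the host passing `{ code, input }` as JSON on stdin. It would be compiled once with the Javy CLI (`javy build` in recent versions) and reused for every action.

```typescript
// Sketch of the "single pre-compiled wrapper" pattern (assumptions noted above).
// Assumes the global `Javy.IO` stdio API provided by Javy's runtime.
declare const Javy: {
  IO: {
    readSync(fd: number, buffer: Uint8Array): number;
    writeSync(fd: number, buffer: Uint8Array): void;
  };
};

// Read everything the host wrote to stdin.
function readStdin(): string {
  const chunks: Uint8Array[] = [];
  const buf = new Uint8Array(4096);
  let n: number;
  while ((n = Javy.IO.readSync(0, buf)) > 0) {
    chunks.push(buf.slice(0, n));
  }
  let total = 0;
  for (const c of chunks) total += c.length;
  const all = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    all.set(c, offset);
    offset += c.length;
  }
  return new TextDecoder().decode(all);
}

// The host sends the user's action source plus its input as JSON on stdin.
const { code, input } = JSON.parse(readStdin()) as { code: string; input: unknown };

// Evaluate the user code with only the arguments we choose to expose
// (a real wrapper would add further isolation around this step).
const action = new Function("input", `"use strict";\n${code}`);
const result = action(input);

// Write the result back to the host on stdout.
Javy.IO.writeSync(1, new TextEncoder().encode(JSON.stringify(result)));
```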
A developer-focused IoT Cloud Platform. The idea stems from pain points experienced while automating an indoor farm a few years ago where I had to spend way too much time building the data collection and analysis infrastructure instead of focusing on the actual automation.
Devices connect via secure MQTT, HTTP, or WebSockets and send structured, typed data. Each device gets its own sequential mailbox for messages. You can trigger webhooks or broadcast messages to other devices based on incoming data, powered by programmable actions.
Just deployed to production. Currently working on Device SDKs (coming very soon) and time-series analytics. Check out the platform, we're in technical preview now. Happy to answer questions and appreciate any feedback.
We're currently working on a data explorer which will allow you to choose, filter, and sort datapoints in a table view. We're gonna add time-based aggregations, charts, anomaly detection, and correlation analytics soon after.
Any feedback or suggestions on what you would like to see?
The problem with that is that it's global. It can even affect websites if they use a media query for it. I'm OK with/enjoy the other animations (I rarely see them, but appreciate them when they happen); the only animation I wanted to disable was the desktop-switch one.
It was once possible, I think, but it seems that's impossible on newer versions of macOS. Because of that, Spaces are unusable for me as a programmer. I'd love to have a terminal in one space and a browser in the other, but the delay in switching between the two is very noticeable, and considering how many times I'd do that, it would probably take minutes out of my day.
Yes, it is global. Even running `matchMedia("(prefers-reduced-motion)").matches` in the browser console returns true. I see no way of disabling reduced motion only for Safari either.
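For reference, this is roughly how a page picks up the OS-level setting, which is why it leaks into websites:

```typescript
// How a page typically detects the OS-level "reduce motion" preference.
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

if (prefersReducedMotion.matches) {
  // With the macOS accessibility setting turned on, this branch runs on any site
  // that checks the query, so its animations get disabled too.
  document.documentElement.classList.add("no-animations");
}
```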
That said, if you do decide to use Spaces, I want to point out a macOS setup that helps you keep apps on different Spaces and get an experience (slightly) closer to i3wm and other window managers.
First, create 10 Spaces. Then go to Settings -> Keyboard -> Keyboard Shortcuts -> Mission Control -> expand the Mission Control dropdown. You'll see options to set keyboard shortcuts for each workspace there. I've set them to Option+{1-9, 0 for 10}.
Then just open some of the permanent apps you use, and right-click on their Dock icon -> Options -> Assign to This Desktop. I keep the browser in workspace 1 and my messaging app in workspace 10.
I know this isn't the best solution, but behind some well-hidden settings, it is possible to get a pretty decent window management setup on macOS. Oh, also: I sometimes use Amethyst for i3wm-like window layouts, and it lets you set shortcuts to move apps from one workspace to another.
One workaround I use is to skip Spaces and set custom Alfred hotkeys for the apps you switch to most often, such as iTerm, the browser, Xcode, VS Code, and the file browser, plus Rectangle.app for shortcuts to maximize/minimize apps (without the default animations). It takes some time to train muscle memory though - I use cmd+ctrl+(j or k or l or ;) to switch between my most frequent apps. I also use F1 as a hotkey for iTerm (Quake style) and F2 for Dash.app.
I think it still takes like half a second for focus to change, which is the thing I care more about. I would be fine if I could switch and then immediately start typing before the animation was finished.
uBlock Origin will take care of blocking tracking popups/banners for you if you enable the "EasyList cookie" filterlist in the settings. It is not activated by default.