A good ADHD coach can be life-changing, as cheesy and corny as that sounds. As you’re likely aware (but for the benefit of everyone else), the ADHD brain is not extrinsically and intrinsically motivated by the same things as a more neurotypical brain.
An ADHD brain is more motivated by external urgency, while a more neurotypical brain is motivated externally by importance. This often leads to ADHD people being perceived as anything from flaky or unreliable to totally unconcerned with problems and/or commitments. In social relationships, they are perceived as anything from touch-and-go to just unwilling to invest in social bonds. The worst part is that if you are able to mask well in one area, you just don’t have the energy to mask everywhere else. It’s like an unwinnable game of whack-a-mole.
It has taken many years, and losing so much, to get to where I am, and it’s still objectively shit. Knowing that everything in the world is not made for how your brain functions, and is often made specifically for how the majority of people’s brains function, is depressing.
I’ve found that very carefully selecting work that lets me lean into my specific strengths (good under pressure, able to dive deep on technical problems and pull out results, and being a good business communicator), in conjunction with aggressively automating or pre-preparing the parts of my life that suck (laying out clothes/tools/equipment the night before, having a checking “escrow” account that all my auto bill pay gets pulled out of), helps me function more efficiently.
It gets better: you can carve newer and deeper neural pathways by sticking to routines and finding ways to get that dopamine.
I want to learn how to properly build a habit/routine that isn't dopamine driven. I have ADHD and I recently realized, even as I'm pushing 50 years old, that all of my hobbies, habits, and "likes" have been dopamine driven. I need to learn how to exclude the reward of a dopamine hit as the reason for doing something. Resources welcome.
Slightly OT as it's about the API, not the web app, but I think it's related to the fact that they switch out models without informing people pretty often, and I think they do a fair amount of A/B testing -- so it's still very "black box" from a user perspective.
Since a few weeks ago, the API has been completely ignoring my system prompt. I have things like "never apologise", "always assume the user has already consulted with competent authorities and other sources; do not give advice not directly asked for", "be succinct and do not repeat parts of the question in your answer", and "do not use phrases like 'it is essential to note' or 'it is important'" to try to cut through its BS hedging.
This used to work great. Now I just get the same responses as if I hadn't put in a system prompt at all.
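For context, this is the kind of thing I mean by "system prompt" -- a minimal sketch of how the instructions are passed as a system-role message in a chat-style API call (the model name and the commented-out client call are illustrative, not a working request):

```python
# Sketch: passing behavioural instructions via the "system" role.
SYSTEM_PROMPT = (
    "Never apologise. Be succinct and do not repeat parts of the question "
    "in your answer. Do not give advice not directly asked for."
)

def build_messages(user_text):
    """Build the messages payload; the system role is where the instructions go."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# The actual request would look roughly like (requires a client and API key):
# client.chat.completions.create(model="gpt-4", messages=build_messages("..."))
print(build_messages("What is a /64?")[0]["role"])  # system
```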
> With no real knowledge of LLMs, and having only recently started to understand what LLM terms like 'model, inference, LLM model, instruction set, fine tuning' mean, what else do you think is required to make a tool like yours?
This was me a few weeks ago. I got interested in all this when FlexGen (https://github.com/FMInference/FlexGen) was announced, which made it possible to run inference with the OPT model on consumer hardware. I'm an avid user of Stable Diffusion, and I wanted to see if I could have an SD equivalent of ChatGPT.
Not understanding the details of hyperparameters or the terminology, I basically asked ChatGPT to explain these things to me:
Explain to someone who is a software engineer with limited knowledge of ML terms or linear algebra what "feed forward" and "self-attention" are in the context of ML and large language models. Provide examples when possible.
I did the same with all the other terms I didn't understand, like "Adam optimizer", "gradient", etc. I relied on it very heavily and cross-referenced the answers.
Looking at other people's code and just tinkering with things on my own really helped.
Through the FlexGen Discord I discovered https://github.com/oobabooga/text-generation-webui, where I spent days just playing around with models. This got me into the Hugging Face ecosystem -- their transformers library is an easy way to get started. I joined a few other Discords, like LLaMA Unofficial, RWKV, EleutherAI, Together, Hivemind and Petals.
I bookmarked a bunch of resources, but they're very scattered. Here are some:
Prompt: create a rap song about a guy named Roman writing .NET APIs
Sure, here is a possible rap song about a guy named Roman writing .NET APIs:
Verse 1:
I'm Roman, and I'm a code king
I write .NET APIs, and I do my thing
I've got Visual Studio on my screen
Writing code that's clean and lean
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day
Verse 2:
I'm a master of C#, no doubt
I write APIs that always work out
I'm a whiz with classes and objects too
I can handle any challenge that comes my way
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day
Verse 3:
I'm a pro with ASP.NET and Web API
I can build a server in a jiffy
I know my way around REST and JSON
I can make your data flow like a river
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day
Verse 4:
I'm a rockstar with Azure and the cloud
I can make your apps run smooth and proud
I'm a wizard with microservices and containers
I can help your business grow and prosper
Chorus:
Roman, Roman, writing .NET APIs
Roman, Roman, coding all day.
Outro:
Roman, Roman, the .NET API man
Writing code that's always in demand
Roman, Roman, the code hero
Writing .NET APIs for the win.
This doesn’t seem as simple as sticking your IoT devices behind a /64. Your site should be assigned at least a /56 from your ISP, giving you 256 possible /64s. If you need more subnets, you need to get a /48 from your provider. And if 2^16 subnets for your IoT devices is insufficient, you will need to go to your Regional Internet Registry for more.
I see this is meant to avoid issues with Neighbor Discovery Protocol and to isolate hosts, but calling it simple, when you will already need to deploy one or more security solutions beyond subnetting to enforce your network policies, is what I take issue with. IPv6 address planning and security are not the same as IPv4.
This talk[0] is an hour long, but from 22:18 on they discuss their recommended IPv6 address plan (the whole video really opened my eyes to IPv6 and made it all much more understandable). You shouldn’t subdivide below a /64: RFC 5375[1] recommends against it, and as discussed here[2] there are pitfalls you can encounter.
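The prefix arithmetic above is easy to sanity-check with Python's ipaddress module (the /56 below is a made-up prefix from the 2001:db8::/32 documentation range, not a real delegation):

```python
import ipaddress

# Hypothetical /56 delegation from an ISP (documentation prefix).
site = ipaddress.ip_network("2001:db8:abcd:ee00::/56")

# A /56 splits into 2^(64-56) = 256 possible /64 LANs.
lans = list(site.subnets(new_prefix=64))
print(len(lans))    # 256
print(lans[0])      # 2001:db8:abcd:ee00::/64 -- could be the IoT segment
print(lans[1])      # 2001:db8:abcd:ee01::/64
```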
It can be relatively inexpensive, depending on which RIR you’re using. There are providers like [Neptune Networks](https://neptunenetworks.org/) or [Vultr](https://www.vultr.com/) that you can peer with from a VPS, so you don’t need to get “proper” IP transit in a datacenter.
I made a similar setup with Home Assistant running on a low power Raspberry Pi 3A+ connected to a Xiaomi Mijia Bluetooth Low Energy temperature and humidity sensor (model code LYWSDCGQ). It takes a single AAA battery and it's lasted almost a year on one.
I also have volatile organic compound / equivalent CO2 (Sensirion SGP30) and particulate matter (Plantower PMS5003) sensors soldered up to a Raspberry Pi Zero W over I2C/UART for logging. It lasts quite a while on a USB battery, so you can take it out and about. An ESP8266/ESP8285 would last longer, but it's easy to log with a Python script, and the SD card gives you loads of storage.
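The logging script really is just a loop appending CSV rows; a minimal sketch, where `read_eco2()`/`read_pm25()` are hypothetical stand-ins for whatever SGP30/PMS5003 driver calls you end up using:

```python
# Sketch of an air-quality logging loop; sensor reads are hypothetical.
import csv
import io
import time

def format_row(timestamp, eco2_ppm, pm25_ugm3):
    """One CSV line per reading: unix time, eCO2 (ppm), PM2.5 (ug/m3)."""
    buf = io.StringIO()
    csv.writer(buf).writerow([timestamp, eco2_ppm, pm25_ugm3])
    return buf.getvalue().strip()

print(format_row(1700000000, 400, 12))  # 1700000000,400,12

# Real loop (commented out; sensor reads depend on your driver library):
# with open("air_quality.csv", "a") as f:
#     while True:
#         f.write(format_row(int(time.time()), read_eco2(), read_pm25()) + "\n")
#         f.flush()
#         time.sleep(60)
```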
I haven't written this up yet, but I will if anyone is interested.
Across 375 samples with an LLM as judge, Mistral scores 4.32 and Marker scores 4.41. Marker can run inference at between 20 and 120 pages per second on an H100.
You can see the samples here - https://huggingface.co/datasets/datalab-to/marker_comparison... .
The code for the benchmark is here - https://github.com/VikParuchuri/marker/tree/master/benchmark... . Will run a full benchmark soon.
Mistral OCR is an impressive model, but OCR is a hard problem, and there is a significant risk of hallucinations/missing text with LLMs.