This quote keeps on showing up out of context. Edsger and I got along quite well. He loved to be the way he was and pushed it. His friend Bob Barton (the great genius of the B5000 design) was very similar in loving to push buttons and being very critical of the not-quite-a-field that we have. When the U of Utah faculty complained to Dave Evans about Barton, Evans said "We don't care if they're prima donnas as long as they can sing!" This was a general attitude back then. Dijkstra was much more funny than annoying for anyone who had any sense of self. The two biggest problems in our not-quite-a-field were those who listened to him too carefully and those who didn't listen to him carefully enough. In any case, in the research and faculty worlds of the 60s -- where the funding from ARPA was very very different than funding is today -- consensus was not very important: really capable people explored without worrying so much about what others thought (this is hard to explain to people who've grown up in the intensely social and identity seeking world of the last 20 years or so). His comment about OOP (a rather different kind of thing at Xerox PARC than what the term means today) did not bother any of us at all -- it was rather a compliment even (which was also not a big deal) for those of us who liked the way California worked in the 70s. His comments helped when they helped, and they mattered not at all when they didn't.
My favorite feature is "video-sync=display-resample". This gives you perfect frame pacing by synchronizing the video to vertical refresh and resampling the audio to match. If you set your refresh rate to an integer multiple of the frame rate, every frame will be shown for the exact same time. The audio resampling compensates for very small discrepancies between speeds by imperceptibly changing the audio speed. It's similar to the proprietary DirectShow filter "ReClock", but AFAIK ReClock does not dynamically adjust the speed to compensate for clock drift like MPV does, in which case you could theoretically still get imperfect frame pacing. (I haven't actually used ReClock so I could be mistaken here.)
Also great is MPV's support for arbitrary OpenGL shaders. This can include very complicated shaders, e.g. real-time neural network upscaling:
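For reference, here's roughly what that looks like in mpv.conf (a sketch: the shader filename is a placeholder for whatever shader you've downloaded, and very old builds spell the shader option "opengl-shaders" instead of "glsl-shaders"):

    # ~/.config/mpv/mpv.conf
    video-sync=display-resample              # sync video to display refresh, resample audio to match
    interpolation=yes                        # optional motion interpolation; requires display-resample
    glsl-shaders="~~/shaders/upscaler.glsl"  # user shader, e.g. a neural-network upscaler (placeholder name)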
By specifying the SSID, encryption type, password/passphrase, and if the SSID is hidden or not, mobile device users can quickly scan and join networks without having to manually enter the data. Note that this technique is valid for specifying only static SSID passwords (i.e. PSK); dynamic user credentials (i.e. Enterprise/802.1x) cannot be encoded in this manner.
Order of fields does not matter. Special characters """ (quotation mark), ";" (semicolon), "," (comma), ":" (colon) and "\" (backslash) should be escaped with a backslash ("\") as in MECARD encoding. For example, if an SSID were "foo;bar\baz", with quotation marks part of the literal SSID name itself, this would be encoded as: WIFI:S:\"foo\;bar\\baz\";;
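To make that concrete, a complete (made-up) example for a hidden WPA-PSK network named "MyNetwork" with passphrase "p4ssw0rd" would be:

    WIFI:T:WPA;S:MyNetwork;P:p4ssw0rd;H:true;;

Here T is the authentication type (WEP/WPA/nopass), S the SSID, P the password, and H the hidden-network flag.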
- For your familiar team, during any verbal communication, take time to restate your position and explain why your emails may be brief. Remind them that text doesn't always convey tone well, even more so when written quickly, to establish up front that it's never personal (and make sure it's never personal)
- Practice figuring out exactly what you want someone to take away from your email and figure out how to effectively reduce your content. Don't bother with automated tools on this, they're all awful and just introduce more confusion -- you need to practice this and see what works with different types of communicators.
- Size up who you're writing to; if they're verbose, likely they think verbosely and you'll need to pick out the elements you need to expand on. For more concise speakers, use specific terminology (they will figure it out)
- Use white-space heavily to draw attention to important topics. Save highlighting/bolding for only the items it's absolutely essential they understand (e.g., "Don't do this, you'll break everything", "we must absolutely not do this because of X," and expand on X a bit more). A simple line break makes it easier to read
- Judge your audience and write an appropriate response -- e.g., Sales doesn't need to know why a bug is taking so long or why you aren't fixing it; you can take the time to explain, but ultimately either:
a) what you write will just be copy/pasted to a client who also won't understand it
b) your contact will muddy your idea anyways when trying to rephrase it
Focus instead on what needs to be understood and save the details for those who need it
- End longer emails with an enumerated list of action points; if they take anything away from your email, make it that, and people will typically respond in kind to keep emails fast and short
- Drop the corporate signature -- no one cares about or likes sigs with dozens of images for certs/awards no one has ever heard of, and they clutter communication
- Don't be afraid to outright ask for a TLDR -- don't be rude, just say something like "I'm having trouble following this; can you provide a more concise summary that clarifies these points for me:" followed by a bullet list of the things you need to know to take action
- Identify in advance stuff you can automatically delegate to your trusted team and make a distribution list for this. Require people go through the list. You can speak much more concisely to those who know you because they understand what needs to be done, so once it lands, just fwd it to your team @'ing the relevant person with brief instructions
- Reports are for start of day/end of day; filter out the important ones and be very judicious here. Reports should serve you, not the other way around. If you're spending more than an hour with reports, likely you have too many or you need to revisit the purpose of the report
- Random emailers always get deprioritized. If you cannot figure out what it is they want in a few seconds of reading, ignore it, or if you sense there is some actual importance, send a quick email asking for the polite TLDR
- (Technical items only): unless it's a specific issue you're working on currently/frequently, practice filtering out and just mentally categorizing "we have an issue with X" so that if you deal with X later, you know to check this at that time. Such items are good to have a note on, but you don't need the specifics until it comes up.
- Don't let randoms use chat with you -- they must go through the filters and email no matter how important it seems to them
- Prioritize your team and make sure they understand they have your attention -- even a quick "I've seen this, I need time to respond though. Meetings till HH:MM" is enough (remember to stress to your team that brevity never means you're ignoring them)
- Corollary to the above, let your team know how to get your attention for important stuff, and then respect those methods. This is important to keep the flow of communication good
- Turn off the automated responses from Outlook/Gmail; most of the time they're useless, miss the context entirely, and will just end up generating more email
- For stuff you do need to write out frequently (instructions on accessing something, process, etc), start documenting it carefully and send links with a quick polite note like "hey! We actually have this documented here in full. Give it a read and let me know if you have questions"
- AutoHotkey can be used for other such items that need to be done repeatedly
- Take control of the Subject line once you own an email and make the email fit your filters. Very few people will care, and it only makes your life easier
- Practice typing. I used to play a lot of online games so got used to fast typing, but there are lots of typing games that are fun (Typing of the Dead is not only funny as heck, but it's pretty good, I unironically recommend it)
- Practice objectivity and removing accusatory statements and inflammatory ones. "You did this, here's why it's wrong" feels personal and you get defensive emails as a response. "Looks like X is the current state, we need to get to Y, here's the path I propose" does the same thing (teaches them the right way, corrects the state), but avoids the backlash
I could go on as I have a LOT of thoughts on this, but ultimately it comes down to figuring out what you actually want to say, avoiding personal attacks, delegating when possible, documenting items you frequently repeat, and making the conversation fit your filters
The gist (might work better for fiction, unsure): write a single sentence summary, then, three sentences to describe what major things happen, then fractal it out - keep expanding. This way, from the start, you have a cohesive story with a planned-out arc.
> Lesson #3) Anyone can sign up for Public Access to Court Electronic Records (PACER) to search and retrieve your own legal documents [...] getting them yourself at $0.10 a page.
If you do so, you can also use RECAP: https://free.law/recap. It's like the internet archive for PACER documents.
It's a browser extension that helps create a crowdsourced archive of the PACER content. You can then search that archive (this time for free) in https://www.courtlistener.com/recap/
For stuff not on RECAP, PACER can be used directly, and the fee is waived if it's less than $30 that quarter, according to https://free.law/pacer-facts
There's a lot of useful command-line tooling for KVM- and QEMU-based virt. Here's a small selection, with a few example invocations sketched after the list:
• virsh — This[1] is libvirt's shell interface; it gives you access to the rich set of libvirt APIs.
• virt-builder — Use this for rapidly building minimal or customized virtual machines; it's extremely flexible; check out its man page[2]. And here's[3] a quick example that connects virt-builder and virsh together.
• virt-install — Use this if you don't like the default build of the template images from virt-builder; it lets you create "headless" servers from the command line via kickstart files and Linux OS install trees.
• guestfish and the libguestfs suite[4] — This rich set of tools helps you in a variety of use cases: repairing broken disk images, editing, cloning, and debugging disk images, and more. It has saved my behind a lot of times.
• qemu-img[5] — This Swiss Army knife lets you manipulate disk images (QCOW2, raw, et al.) offline in powerful ways. Example operations include creating images, managing backing chains, taking offline snapshots, merging disk images, converting images from one format to another, and more.
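To give a flavor of the above, here are a few illustrative invocations (a sketch only; the names, sizes, and OS versions below are placeholders, so check each tool's man page for the exact options your version supports):

    # virt-builder: build a minimal Fedora image and set a root password
    virt-builder fedora-30 -o /var/lib/libvirt/images/vm1.qcow2 --root-password password:secret

    # virt-install: import that disk image as a new libvirt guest
    virt-install --name vm1 --memory 2048 --vcpus 2 \
        --disk /var/lib/libvirt/images/vm1.qcow2 --import --noautoconsole

    # virsh: manage the guest via libvirt
    virsh list --all
    virsh start vm1
    virsh console vm1

    # libguestfs: peek inside the disk image offline
    virt-cat -a /var/lib/libvirt/images/vm1.qcow2 /etc/fstab

    # qemu-img: create a qcow2 overlay backed by the original image, then inspect it
    qemu-img create -f qcow2 -b vm1.qcow2 -F qcow2 vm1-overlay.qcow2
    qemu-img info vm1-overlay.qcow2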
Have you considered using something like Expounder (https://skorokithakis.github.io/expounder/) in your posts? (Disclosure, I made it but it's a small open source lib).
I don't see why we can have full-blown web apps but our text needs to be very specifically just text these days.
I've started using draw.io [1] for diagrams I want to embed in markdown files. The cool feature of draw.io is that it can embed the data structure describing the diagram in a PNG file. So you get a PNG file that is both the source for your diagram (so you can edit it later) and the presentation (you can embed PNG files and they will render fine in GitHub-hosted md files; there's a tiny example after the list below).
Then I use the markdown-images package [2] for Sublime Text, and I can see those png images in my markdown files in the editor as well.
Benefits of this approach:
* Powerful diagram editor, free to use.
* Editable diagram and embeddable image in one file.
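As a concrete (hypothetical) example, the path and filename below are made up, but this one line is all the markdown needs, and the same .png reopens in the draw.io editor when you want to change the diagram:

    ![architecture diagram](docs/architecture.drawio.png)

(The trick, as far as I know, is exporting from draw.io as PNG with the "include a copy of my diagram" option enabled, so the XML source rides along inside the image.)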
> So there is a market gap between so-so webcams for $100-200 and a full-blown setup with a mirrorless camera...
Don't know where you can buy a readymade one. However, if you don't mind DIY, try our free software project showmewebcam. It uses a Pi and its HQ sensor and some software glue to make a USB webcam [1]. You'll have a wide selection of affordable lenses [2] and cases [3] that people cook up for their personal use. It's so much fun experimenting with them for different use cases.
Last time I commented here, there were criticisms about the quality of the lens that the Pi Foundation offers. We have since discovered many decent alternative lenses that help remedy the quality and distortion issues of the stock lens. A good example of the knowledge that accumulates as we get more users and more people paying attention is the Commonlands lens guide [4].
The software is very actively developed and we have a pretty supportive developer community. We try our best to follow good software engineering practices so we can maintain this project in the long run. The software is designed to be modular, and it is easy to understand, build, and improve upon. I have a lot of fun building it - in fact I just finished a 5-hour coding session addressing comments on the pull requests I started earlier. I hope it eventually becomes more than just another Pi project for fun; the firmware has the potential to make this solution more powerful than the best webcam money can buy, just as OpenWrt is for routers.
I still have yet to record a decent demo video to demonstrate the power of the Picam, but there are just too many things and too little time to get it done. Oh well...
But they're not! At this point I think we should just scrap the word privilege and call it "shit I don't have to deal with because I'm X" since it's apparently too politically charged to have a discussion about it.
Just one example.
As a white person I have never in my life cared about getting a receipt from any store I shop at, have never been accused of shoplifting, have never had the self-checkout person show even the slightest suspicion even when I was broke in college and ringing up way too many bananas, and when the sensor things beep the workers apologize and tell me to just go.
This is not the experience of black Americans where children have to be taught to always get and keep their receipts because they get stopped so frequently.
If you are a Linux user and own a nice camera, you can use gphoto2 and ffmpeg to create a virtual camera. I posted a howto on HN a couple of days ago[0][1]; here it is for anyone who might need it. I tried it with both a Sony RX100VA and a Sony A7III, and in both cases it works really well.
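In short, the pipeline looks roughly like this (a sketch: it assumes the v4l2loopback module is installed and that /dev/video2 ends up being the loopback device; adjust the device path and pixel format to taste):

    # create a virtual video device the browser can see
    sudo modprobe v4l2loopback exclusive_caps=1

    # pipe the camera's live view into it
    gphoto2 --stdout --capture-movie | \
        ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video2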
edit: forgot to mention that this works over USB, so you don't have to pay a crazy markup for a capture card
edit2: (because I'm so excited about getting this to work) here is a list of supported cameras[2] - sadly I was not able to get GoPro Hero 6 to work.
I just recently started learning Swahili and found some amazing classes from Language Transfer[0]. (I have no connection with them.) I've spent a lot of time learning languages, including tricky ones like Mandarin, and this is by far the best learning method I have ever come across.
The "class" takes the form of a dialogue between teacher and student where the student is an actual learner unfamiliar with the language and you try to learn with them. Each new word is introduced with a short but thorough explanation of how it fits in with everything you have learned so far and sometimes a mnemonic or story to help with recall. You are always prompted to try and construct new sentences yourself with the new words, without being told how, so that you are naturally exploring the grammar and idiomatic structure of the language and building an intuition for it. You are also prompted intermittently to recall previously learned words.
Within just an hour or so, I felt comfortable creating many sentences. I think this style of teaching helps build confidence in the language quickly, which is really important for making progress. They only ask for donations for their content and all of it is on soundcloud and youtube. I really recommend trying it out.
I'm convinced that almost nobody is good at the whole cycle of creating/maintaining something.
Some people are great at coming up with new ideas but quickly get bored with the implementation. Some people can relentlessly improve on an existing thing but can't come up with the initial idea. Some people are great stewards of an established program but don't thrive in the chaos of rapid iteration.
I think instead of trying to mold yourself into something you're not naturally good at, you should try to figure out what you are naturally good at and build a team around it to support you.
I'm speaking in broad strokes of course, but reading your post, I think you are just not going to be a sole proprietor. You need a team member who can catch your early enthusiasm and then help see the project through to completion.
You need a finisher. Not every starter is a finisher, and not every finisher is a starter, and not every finisher is a good maintainer, either. They're different things.
I took their Self-driving car Nanodegree and it was hands down the best online course I've ever taken in my life (and I took some crazy courses from MIT and Stanford). Where else could you end up with a capstone of actually programming a real self-driving car that is then running your code on some course in Mountain View?
Offerings right now:
1. If you want academic rigor only, go with edX
2. If you want academic rigor and some more approachable/practical courses, go with Coursera; same if you want to do a full MS/MBA/MSA online (they have the widest selection right now compared to edX/Udacity)
3. If you want unique practical skills in hot areas like self-driving/flying cars, deep (reinforcement) learning etc. without fully committing to rigorous practices, go with Udacity
4. If you need anything else, go with Udemy, Masterclass, School of AI and similar
My wife & I used to use some of these services, but switched over to the following process:
1. cookinglightdiet.com for recipe ideas & meal planning. They are typically quick, simple, and healthy recipes. Costs $4 / mo
2. Generate shopping list on that site.
3. Populate my shopping cart on the grocery store's site - we go to Ralph's typically. Pick a time to get the groceries. That service of theirs is called Clicklist.
4. Pick up groceries at the designated time ($8 fee).
Picking meals is easy, generating the shopping list & filling the cart takes a little bit of time, and picking the groceries up takes minutes (way better than spending an hour and a half searching for ingredients yourself). It's like normal grocery shopping with some of the perks & convenience of meal kit services. Credit to my wife for coming up with all of this.
Thanks for mentioning an amazing work of literature. The Master and Margarita is my favorite fiction book! I've read it in two translations and I prefer the Burgin & O'Connor to the Glenny, but both are great.
Every time I read it I gain more insights. I absolutely recommend reading this book alongside a reader's guide, which gives more background and depth; there are many biblical, historical, and author-related references that won't be understood otherwise. The author's own life is massively relevant to the events of the novel. I recommend this guide:
> I still find it hard to write maintainable multi-threaded code for macOS or iOS [...] without having to worry about the minutiae of parallelism (threads, semaphores, locks, barriers, etc.).
I find this surprising, as GCD does insulate you from that low-level stuff. When you need to work with mutable data, just create a dispatch queue, and only ever access the data by dispatching a function to the queue. Both Swift and Objective-C have friendly syntax for anonymous functions that makes this lightweight and easy.
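A minimal sketch of that pattern in Swift (the type and names here are made up for illustration):

    import Foundation

    // All access to the mutable counter goes through one serial queue,
    // so there are no locks or semaphores to manage by hand.
    final class VisitCounter {
        private let queue = DispatchQueue(label: "com.example.visit-counter")
        private var count = 0

        func record() {
            // Mutations are funneled through the queue, so they never race.
            queue.async { self.count += 1 }
        }

        func current() -> Int {
            // Reads hop onto the same queue, so they see a consistent value.
            queue.sync { count }
        }
    }

    let counter = VisitCounter()
    counter.record()
    print(counter.current())   // prints 1: the serial queue runs the increment first

For heavier contention you can reach for a concurrent queue with barriers, but for most "protect this piece of state" cases, the serial-queue version above is all you need.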
I used the Onsi Haiku over on the equivalent HN thread on Heroku's blogpost:
Here is my source code.
Run it on the cloud for me.
I do not care how.
The gist is that Docker containers are awesome for the Day 1 experience. I write a Dockerfile and I'm off to the races.
But then Day 2 rolls around and I have a production system with 12,000 containers[0].
1. What the hell is in those containers, anyhow?
2. A new CVE landed and I want to upgrade all of them in a few minutes without anyone being interrupted (or even having to know). How?
3. I have a distributed system with many moving parts. I build a giant fragile hierarchy of Dockerfiles to efficiently contain the right dependencies, making development slower. Then I snap and turn it into a giant kitchen-sink Dockerfile with the union of all the dependencies in it. Now production is slow as hell.
4. Operations become upset about points 1-3. Now I can only use curated Dockerfiles, builds can only go through our elaborate Jenkins farm, rules rules rules. Wasn't the purpose of Dockerfiles to make this all just ... go away?
Buildpacks solve all of these. I know what's in the container because buildpacks control the build. I can update CVE flaws in potentially seconds. Each container can have what it needs - no more, no less.
And most important: the buildpack runs locally, or in the cluster, exactly the same. It's all the developer benefits of Dockerfiles/docker build, minus most of the suck.
For those interested, I recommend reading how FIDO U2F works. There's more in a security key than just FIDO U2F, but FIDO U2F is easily the most ergonomic system that these security keys support. Simplified:
* The hardware basically consists of a secure microprocessor, a counter which it can increment, and a secret key.
* For each website, e.g., GitHub, it computes an HMAC-SHA256 over the domain (www.github.com) using the secret key, and uses this to generate a public/private keypair. This is used to authenticate.
* To authenticate, the server sends a challenge, and the security key sends a response which proves that it has the private key. It also sends the counter value, which it increments on each use.
If you get phished, the browser would send a different domain (www.github.com.suspiciousdomain.xx) to the security key and authentication would fail. If someone somehow managed to clone the security key, services would notice that the counter values are no longer monotonically increasing, so you could at least detect that it's been cloned.
I'm excited about the use of FIDO U2F becoming more widespread; for now all I use it for is GitHub and Gmail. The basic threat model is that someone gets network access to your machine (but they can't get credentials from the security key, because you have to touch it to make it work) or someone sends you to a phishing website but you access it from a machine that you trust.
S3 - not much to say: fast, durable, expensive...the gold standard. Given the limitations of the options below, we use it for rotating nightly backups despite the cost.
Glacier - great for cold storage/archive, but has 90 day minimum
OVH hot - OpenStack-based, cheaper than S3 but not absurdly cheap; charged for egress even intra-DC, which is absurd and kills many use cases. They have crippled OpenStack permission management (i.e. no write-only keys with lifetime management per bucket, which is necessary for doing backups securely)
OVH cold - charges for ingress but then storage is crazy cheap, and egress not as bad as Glacier. This is our preferred archival option.
C14 - not object storage, more like a "cold" ftp dump
B2 - pricing is epic, but the S3 incompatibility is a pain, as is the lack of Backblaze-sponsored libraries (the library in the Python b2 CLI is not a proper API)...we've been working on adding B2 to WAL-E. However, their permission/user management doesn't cut it.
Wasabi - S3 compatible, great pricing if not for 90 day minimum, which they hide in the fine print
Coyotes are too clever for their own good: they know that people shaking jars full of coins can’t hurt them. So animal control has to get called, and when the coyotes don’t shoo, the animal control person who loves animals has to shoot the coyote.
Coyotes are winning the mini-game of each human interaction, but they are losing the meta-game of what society will do if coyotes aren’t scared.
Personal Connection
This reminds me of a turning point that I had in high school. When I was young, I would get in trouble and try to get around the rules each time I got in trouble. “Well, technically…”
But at some point I realized that most of the time you aren’t getting in trouble because you are breaking the rules. You are getting in trouble because you are making the rule makers unhappy. Once I had that realization I was able to focus on relationships with the rule makers and figure out what they actually cared about. This allowed me to break the rules just as much but without getting in trouble.
Having followed Nuitka since it started, I can offer my perspective:
- Before Nuitka, someone already did a "Python->C" compiler along similar lines: translate the Python source code into the C calls that the interpreter would make, eliminating interpreter overhead and providing C compiler optimization opportunities. That thing sort-of worked (with v1.5.2 IIRC), but was cumbersome to use and delivered a meager 5% performance improvement for the cases it did support; it was abandoned.
- Nuitka's plan had the same thing as a starting phase; people told the Nuitka guy that he was wasting his time, based on that prior experience. When he actually delivered a mostly-robust working version (much more usable than the previous attempts ever were), it indeed delivered only a small performance gain compared to CPython.
- As a result, it seemed like the community believed both that the whole thing was futile and that the developer was tilting at windmills.
- A lot of time passes; Nuitka keeps improving with better analysis, translation, compilation, etc. - but the community has already cemented its opinion.
- Nuitka remains a useful magic system known to few.
I would say that the early Nuitka versions (and the prior attempt) gave it a SEP field that has never been lifted, and short of e.g. DropBox or Facebook adopting it, nothing will lift it either.
I’m on my 4th Thinkpad T-series - T520, T530, T450s, T460s - each one was a winner. I ditched the T5XX series when they borked the keyboard layout by adding a numpad. Used to run Ubuntu, now I run Debian, stable or testing depending on where we are in the release cycle at install time. I plan to take another look at Ubuntu now that they’ve given up on Unity. A coworker is happily on the T470s (the first USB-C model in the series). I always get 1920x1080 since my eyes are accustomed to it, but multiple coworkers are happy with 2560x1440. Used to get the Nvidia cards, now very happy with the integrated Intel graphics. In general, last year’s hardware requires almost zero messing with Linux to make everything work, whereas with the latest hardware, be prepared to solve a couple of minor issues. Ubuntu’s font rendering and Infinality are both amazing and better than macOS or Win10 to my eyes.
I’m ridiculously excited to eventually upgrade to a T480s because it’s the first in the series to offer a quad-core CPU. They’re selling the quad-core with Intel graphics which is exactly what I want. I hope Lenovo did a good job with the thermal engineering...
Thanks to all the open source developers that deliver this totally rad experience on Linux, Debian, and Gnome <3
I've tried Feedly but my goodness, the corporate language is such a turn off. I just want a feed reader, I don't want to "Fuel your team's success," "Enrich articles with unique insights," "Automate your content workflows," "Reinforce your digital brand". These features may be cool and useful, but the way they're sold makes me gag.
I used my ad blocker to remove the "Well done!" that appears when everything is marked as read, it's so patronising.
Now I use rrss (https://github.com/pmarinov/rrss) because it just reads feeds and doesn't constantly promote itself with management garbage.
Hey everyone, if you're looking for a fantastic blog engine, I want to tell you about Ghost: https://ghost.org
I found Ghost while looking for a new blog engine for my company. My first instinct was to use Medium as well, but I ended up hitting a lot of issues because it was impossible for us to customize it (add custom javascript, charts, disable the lame "highlights", custom domain was not free, etc...).
1. Ghost is open source, developed by a foundation in the open. They make their revenue open, their issue tracker, their code, everything.
2. Ghost is self-hostable, as well as hosted on a paid plan. The paid plan is a little pricy (https://ghost.org/pricing/) but I recommend it if this is for a company blog.
4. Ghost has an excellent featureset. It's powered by Markdown and has a wonderful markdown editor in its admin/authorship interface. It also supports authors, editors, contributors, drafts, publishing schedules, tags, etc.
Seriously, try it out. I'm not affiliated, just a huge fan. I want to see more people use and support these fantastic open source tools, rather than complain about Medium like there's no alternative.