I think it includes transfers and express lines. I noticed if I click on a local stop that meets an express line somewhere, the express line stops are still quite a bit darker than other stops.
That's likely why 18th and 14th street are so similar: It only takes a minute or two to get from 18th st -> 14th st.
For me, the nice part about Obsidian is that they're just markdown files. So even if something happens to Obsidian, the notes still exist and are still easily transferable to something else.
Fairly easily transferable to something else. I assume that many of the plugins that power users use involve adding non-standard stuff to the files. Like if you're adding a bunch of metadata to be consumed by the Dataview plugin, all that metadata might be worthless in a new app unless someone creates an equivalent plugin elsewhere.
It's open source, so they can. But people who want forward-compatibility should probably think about what their raw markdown files look like, and how useful they'd be in another program.
Personally I do avoid add-ons that create special syntax in the md files so that if Obsidian ever goes shitty I’ll have an easier time migrating to whatever alternative.
If it ever comes to needing an Obsidian replacement I'd hope it can aim for compatibility with the plug-in ecosystem, at least initially. No idea how difficult a target that would be. Either way, I certainly worry about my data in Obsidian less than I worry about Evernote, OneNote, or Apple Notes, even with a couple of non-standard markdown additions.
Since it's not VC backed I'm hopeful about Obsidian building a long-term sustainable business without having to turn shitty, but who knows. I should sign up for Sync and give them some money.
> I assume that many of the plugins that power users use involve adding non-standard stuff to the files. Like if you're adding a bunch of metadata to be consumed by the Dataview plugin, all that metadata might be worthless in a new app ...
The metadata is YAML frontmatter. It works with any frontmatter-aware tool, which includes, for example, most static site generators.
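For illustration, a note carrying that kind of metadata might look like this (the field names here are made up, not anything Dataview requires):

    ---
    title: Reading notes
    tags: [books, 2023]
    rating: 4
    ---

    # Reading notes

    The body is plain Markdown; everything between the --- fences is YAML.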
It's all just Markdown; the plugins layer on top of it.
Your site basically works if you shove it into GitHub Pages. With a touch of config matching, you can open an Obsidian "vault" in Foam, a VS Code extension (like Roam, but for a local folder structure of Markdown files).
Most add-ons, like, say, Excalidraw for diagramming, are themselves tools with counterpart plugins for GitHub Pages, Hugo, MkDocs, etc., so the content stays portable across static site generators (SSGs).
The only thing it seems to be "missing" is the CRDT-style multiplayer live editing of tools like Notion or Craft.
That's the theory, but it's not like they use strict Markdown. Markdown in the first place is a very simple and limited format, so everyone has their own syntax extensions and tweaks, and Obsidian is no exception. So even if you don't lose the data itself, you could lose a significant amount of your ability to work with it, if Obsidian somehow becomes unusable.
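To make that concrete, here is some of the Obsidian-specific syntax that a strict CommonMark renderer will just treat as literal text:

    Standard link:  [Note](note.md)
    Wikilink:       [[Note]] or [[Note|display text]]
    Embed:          ![[diagram.png]]
    Callout:        > [!warning] Renders as a styled box only in Obsidian.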
And if you build on plugins, this sometimes happens even now, here and there. Plugins becoming unusable because of an update is still not uncommon. Development stopping for whatever reason is also a bit of a problem.
Markdown isn't immune to deprecation in the Obsidian ecosystem - I have files with random junk in the headers, left over from failed attempts to integrate some data management plugin or regime.
How does that change the point being made? Front matter is supported essentially as a bucket of metadata, and it has varying degrees of usefulness without supporting tooling.
Front matter is still a simple text based format that can easily be parsed by other programs, and there is already quite a bit of tooling to process it. Even without tooling, it’s human readable. The degree of usefulness just increases the more sophisticated the tool is that you’re using to read it.
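As a minimal sketch of that, pulling the metadata out of a note takes only a few lines of Ruby (assuming a hypothetical note.md with standard ----delimited frontmatter):

    require 'yaml'

    # Split a Markdown file into its YAML frontmatter and body.
    text = File.read('note.md')
    if text.start_with?("---\n")
      _, frontmatter, body = text.split("---\n", 3)
      metadata = YAML.safe_load(frontmatter)
      puts metadata['title']  # any frontmatter-aware tool can do the same
    end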
Compare this with the painful migration away from Evernote, which left me with a folder full of giant .enex files that I can technically use elsewhere, but at this point are realistically just an archive I’ll never touch unless I really need something.
If you've invested time into adding metadata and app-specific markdown to your setup, you lose that when you move to another tool. Has happened to me before. It's about vendor lock-in when you go too deep into their ecosystem.
Obsidian is free to use, and they make their money from offering an additional sync service and the goodwill of their users. I don't think open source would impact that much. Though it would bear the risk of a hostile fork, maybe one that includes free sync or something. I mean, the Obsidian devs have some bad history on that front.
But they should at least give some guarantee to open-source it if there's been no significant update in a while, similar to the agreement Qt and KDE have. Or at least they should make the source available, so people can contribute or move away more easily in case of problems.
I was part of Techstars Seattle in 2019, though we were mentored by Chris's counterpart at Founders' Co-op (Aviel). It was also a partnership with Amazon, and I really can't say enough good things about that experience even though our company ceased to exist less than a year later.
I'm only half-surprised.
I had a great experience, we learned a ton, and if anything was going to set us up for success it was our time there. Companies are mostly doomed to fail, but our time in Seattle was pretty transformative for me personally. It allowed us to refine our focus and dedicate ourselves wholly to building something people wanted (which, turns out, wasn't that many people obviously). The support and guidance we received from Aviel, our Amazon partners, and fellow cohort members were unparalleled. Say what you want about VCs or Amazon or startup founders (and yeah there are many things to be said), but I really have nothing but great things to say about all of the individuals from our time there. Admittedly, my opinion doesn't carry any particular importance.
On the other hand, I'm not surprised at all when I reflect on the actual Techstars program. Techstars, as an organization, seemed totally on the periphery for much of the program. The lion's share of valuable advice and resources came not from the organization itself, but from everyone else.
Echoing another's sentiments, the value of Techstars seems heavily influenced by location and the mentors involved. We were lucky to only be thinking about two great programs, Seattle and NYC. If we had ended up somewhere else maybe I'd be completely unsurprised.
We went through Techstars Chicago in 2016 and found the network to be top-notch. We met so many incredible people who helped connect us to resources we could not have gotten otherwise.
I think Techstars' inability to pivot, go back to basics, and learn from their mistakes all contributed to their downfall. Now, I know they're not completely shutting down, but I don't know how they'll come back from this. They spent so many years touting that they were the anti-YC. Focusing on areas like the Bay Area means they're going to be competing with other programs and resources that have been part of that community for years. It's going to be an uphill climb for them and I don't think it's going to work.
A friend and I tried to start a company around this, basically resolving three big issues we saw with take-homes:
1) No feedback. If you spend a couple hours on something you should get meaningful feedback.
2) No clear criteria. Is the hiring company going to slam you over syntax errors? If so, you should know that upfront!
3) Time guidelines that had no enforcement and thus led to an arms race where candidates would spend ever-increasing amounts of time and skew the standard of what "good" is.
So we fixed those things:
1) We provided the feedback, not the hiring companies, so legal liability was non-existent for the hiring companies. We double-blinded the process as much as we could (evaluators didn't know who the candidate was and vice-versa).
2) We told candidates upfront what they'd be evaluated on. Not down to the level of "you must implement this problem using a max heap", but we would say something along the lines of "The company is looking for an academic algorithmic solution to this problem" or similar. We would then only allow evaluators to evaluate them on these axes and nothing else.
3) We also strictly enforced time limits by basically telling candidates "hey you'll have 2 hours to submit from the time you hit start and see the prompt, so please make sure you have two hours from when you hit start." -- not ideal, obviously, but the best we could come up with to resolve #3 above.
As you can probably imagine, the market just wasn't really there for this. I think candidates generally enjoyed it in comparison to the vague, unending slog that most take-homes are but:
1) The value prop just wasn't really there for most companies: They mostly use these types of evaluations on more junior candidates, and unfortunately the hiring market for junior candidates is highly skewed towards the employer.
2) More surprisingly, we realized the time their current engineers and managers spent evaluating these take-homes just wasn't really a consideration for them. We tried to frame it in terms of "here's how much it costs you to evaluate these take-homes wrt time spent vs. us", but it was a difficult sell regardless.
We actually had the most success evaluating candidates from more non-traditional backgrounds upfront ourselves and then charging a placement fee if they were hired, but we ultimately didn't really want to continue that.
> More surprisingly, we realized the time their current engineers and managers spent evaluating these take-homes just wasn't really a consideration for them. We tried to frame it in terms of "here's how much it costs you to evaluate these take-homes wrt time spent vs. us", but it was a difficult sell regardless.
employers don't know, nor seem to care, how much it costs them to replace and/or onboard somebody. usually because they have no idea how they're off/onboarding hires.
not too surprising that they wouldn’t care about the cost of one component of the hiring process.
i've done lots of data science take-homes, and the vagueness is a huge killer. they'll hand you a dataset and give you an open-ended prompt to 'analyze it'.
do they just want to see that you know how to load a csv file and make a barplot? or do they want a showcase of advanced statistical modeling to see where your ceiling is?
What shows like Dark and a good Korean drama have in common is that unlike many Western shows, they don't run past their prime and outstay their welcome.
Korean dramas by default are 16-20 eps and end there, and if they're done well, stick the landing and provide a satisfying ending to a complete, pre-planned narrative arc, which Dark also managed to pull off.
> a good Korean drama have in common [...] they don't run past their prime and outstay their welcome
For me, it doesn't feel like that though. I got tired of k-dramas (and I usually watch the 16-episode ones instead of the 20s), because it feels like around the 12~13 episode mark they throw in an unnecessary twist or something, just to get to the 16-episode mark. I do feel like they would be miles better if they usually aimed for 12~14.
Heh, I vaguely recall that at Etsy, predating my time, a significant amount of business logic was done in stored procedures and triggers.
They migrated away from it at some point, but some of the people who handled that migration were still around when I was there. Didn’t sound fun at all, sounded like a horrific nightmare.
This was a constant complaint about Etsy 10 years ago when I worked there; it was a complaint that predated even my time there. At one point we even had to ban people selling spells (yes, as in the seller would offer to do a magical incantation on your behalf).
The exact details might've changed, but the symptoms look the exact same. I remember we had to deal with a lot of people trying to skirt OFAC regulations, for example, by doing all sorts of things to hide the country of origin for something.
Optimistically, I will say this is a very hard problem to solve. Pessimistically, of course, the company benefits from it, so I'm sure it's not a problem anyone besides maybe a few diehard advocates looks too deeply into.
I mean sure, I think that's why it flew under the radar for so long. The dark underbelly of it was people buying love spells and the like, which starts to feel a lot like preying on people's desperation.
I find that genuinely fascinating from an ethical standpoint.
What if the people really believe spells work? Does it matter that they don't? Is it just the ephemeral nature of spells? What if I were to ship a complete spell in a bottle to the person?
(not to be too much of an edgelord) What about religious iconography or religious paraphernalia? I would argue that anything related to that is just a more accepted version of magic spells. Would prayer be banned?
For an e-commerce platform that allows returns: Yeah that's kind of an important sticking point and was a big part of the problem.
The other problem was that spells were effectively a service and you can imagine that we didn't want to get involved in determining the validity or accuracy of services being performed virtually that had zero tangible artifacts.
> What if I were to ship a complete spell in a bottle to the person?
This is probably what people actually do now to get around it, since there's an actual item being sold now.
Given that I am an occultist and a divinatory reader (tarot, etc.), I see no issue with selling spell kits as goods to use. I also see no issue in selling spellcasting or divinatory services either. But I also acknowledge significant problems with companies allowing these to be sold, and the impossibility of validation or "returns".
Comparatively, selling spells can quite easily be compared with a Christian baptism, communion, the Catholic sacraments, prayer. The alms (baskets used to collect money) are the cost of accessing these sorts of spells. Some Christian churches demand 10% of your earnings, which is way more than what some spells would cost.
Christians don't have normalized divinatory methods, so I'm unable to compare those. Their deity, YHVH, expressed a ban on that for most anybody - there exists a type of divination in Judaism usable only by his high priest, while wearing the appropriate vestments, and only for yes/no questions.
From the occult community, what we see with "spell-selling" and the like is that it does attract frauds who are in it just for the money. Naturally, some here will consider any sort of esotericism to be a "fraud" - to those, nothing I say will be of any use. But the issue here is "How do you know if you're getting what's claimed?"
In order to know that you're buying a legitimate spell, the only way to know it's real is if you can see and work with those energies yourself... which would mean you could do it regardless. Or you don't have those abilities developed and thus must trust whoever sold you the spells.
Of course, from a company's viewpoint, there's no way to validate spellcasting at all. Anybody could claim "fraud" and there's no defense against that. So it's easier to not allow these types of sales to happen. Effectively, they're more trouble than they're worth. (And a simple cursory search shows no Christian prayer services being sold either... but those are easy to come by with the abundance of churches in the USA.)
Edit: I'm really tired of having a discussion here, sharing my personal experiences, and having it drowned out with -1's. You may disagree with the occult. But it definitely is an area that Etsy had to deal with.
I guess the -1's are "I disagree with your lifestyle". Quite closed-minded, honestly.
Perhaps it has nothing at all to do with people disagreeing with your religious choices, and more that you posted one sentence in response, followed by a long off-topic screed that includes simplistic criticism of a stereotype of other people's religions?
Etsy generally doesn't go into selling services IIRC, and a lot of people selling this stuff on Etsy sell "I will do a spell/psychic healing/remote reiki/whatever", which is a service.
Also, even in the case of having the parts of a spell shipped to you, how do you do a payment dispute when the spell doesn’t work?
And where would you even go to buy spells in real life?
There's places in pretty much every major city, and even in some surprisingly rural places. Even in ritzy neighborhoods, there are private and public palm readers and tarot card shops catering to the upper crust who will sell anything they can get away with.
In places like New Orleans there's an entire tourism sector for it.
I've "prematurely" split up code across multiple/classes functions so many times before these huge god functions became set-in-stone and unwieldy.
I had a co-worker once who was extremely heavy on YAGNI--and I don't mean that as a pejorative, it was a really helpful check against code solving non-existent problems. They once called me out on a refactor I'd done surreptitiously for a payments flow to split payment states into multiple classes. Serendipity had struck, and I was able to point to a submitted (but not accepted) diff that would've broken the entire payments flow (as opposed to just breaking one state of the flow).
I always think about that every time I question whether I'm isolating code prematurely.
* If breaking up the code makes it easier to understand with its current functionality then absolutely go ahead.
* If breaking up the code makes it slightly less easy to understand right now, but will make it easier to add some feature later that you know you're definitely going to need, then hold off.
Most of the time you "know" you're going to add a feature, you turn out to be wrong, and (worse than that) some other slightly different feature is needed instead that's actually harder to add because you broke up the logic along the wrong axis. Whereas, breaking code up purely based on what makes it easier to understand now, ironically, usually does a much better job at making it easy to add a feature later - it often almost feels like an accident (but it's not, really).
I bet your payment flow refactoring made things a bit easier to understand, even if it had never been worked on again.
> If breaking up the code makes it slightly less easy to understand right now, but will make it easier to add some feature later that you know you're definitely going to need, then hold off.
This. Your code shouldn't be documenting what it will do in the future: it should be clear about what it's doing right now. Otherwise some other engineer, maybe from another team, will come upon it, misread it, and bugs ensue. Trying to save yourself time in the future ends up wasting time for others, and likely for yourself as well.
This relates to "Rule of 3 Before Abstracting": One concrete instance of a proposed abstraction is YAGNI (adding an abstraction blurs the concrete code and makes it harder to understand), two concrete instances are coincidence (it's maybe a sign that you need an abstraction between the two, but you don't have enough scientific data to consider), at least three concrete instances is (finally) a pattern (abstracting a known pattern is right and good and because it becomes a document of the high level pattern itself and the differences and distinctions between the concrete implementations become more distinct and obvious).
That "Rule of 3" is sometimes a good reminder itself not to build for some possible future of the codebase, but to build for the patterns in your code base as they exist today.
I hadn't heard of it, but it's a good rule! There are too many examples of code that was abstracted out to be shared between uses that then evolve independently, and the abstraction ends up with a bunch of if-statements and confusion.
Some notable exceptions are things like addresses, where users often have multiple, but usually people splitting the user table from a "user profile" table are just causing themselves headaches. A wide user table is fine.
Premature method or function isolation: Usually good.
Even if there isn't reuse, the code is usually more readable.
def let_in_bar?
  # && rather than 'and': in Ruby, 'return a and b' returns a before b is evaluated
  is_of_age_in_region? && is_sober?
end
Is a perfectly fine isolation with very little downside.
I know your example is just a toy, but it definitely reminds me of heavy handed over-refactoring into tiny one-line functions.
There are two problems with that approach:
The first is when you're trying to understand what a single function does. If it's 20 lines long but has self-contained logic, that's often easier to understand than referring back to the tiny pieces it's assembled from (even if they're supposedly so self-contained you don't need to look at them individually - that's rarely true in practice). On balance, a function that's slightly too long is usually less bad than one that's in slightly too many little pieces.
The second is when you're looking at the file level. When faced with a file with dozens of tiny functions, it's much harder to get a top level understanding than if it has a smaller number of longer functions.
The second one is the bigger problem, because understanding a project at the file (and higher) level is typically much harder than understanding what a single function does. Even if breaking up a function into smaller pieces does make the function itself easier to understand, if it makes the overall project harder to parse, that's usually a net loss. Of course, it depends very much on the specifics of the situation, and certainly a long function broken down into logical pieces can actually make the overall project easier to understand.
It's relatively rare to refactor a single line of code into its own function, unless it's really hard to read, e.g. some nasty regex.
> even if they're supposedly so self contained you don't need to look at them individually - that's rarely true in practice
If a long function is broken up into smaller functions only for the sake of having smaller functions, then you end up with functions that don't really make sense on their own and, sure, the bigger function is better. But if the broken-up functions are named well enough, like in the example above, then it shouldn't be necessary to see how they're implemented, unless you're tracking down a bug. After all, it's very rare to look at e.g. how a library function is implemented, and for some libraries/languages it's not even possible.
> When faced with a file with dozens of tiny functions, it's much harder to get a top level understanding than if it has a smaller number of longer functions.
Most languages have facilities to help with that, e.g. function visibility. Your smaller number of large functions can remain the public API for your module while the "dozens" of tiny functions can be private. In either case, long files are harder to take in no matter how many functions they're composed of.
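As a minimal Ruby sketch of that idea (the names are made up): the one public method stays the readable top-level story, and the tiny helpers disappear behind "private":

    class ReportBuilder
      # Public API: the only method callers see.
      def build
        format(load_rows)
      end

      private

      # Implementation details, invisible outside the class.
      def load_rows
        [["widgets", 3], ["gadgets", 5]]
      end

      def format(rows)
        rows.map { |name, count| "#{name}: #{count}" }.join("\n")
      end
    end

    puts ReportBuilder.new.build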
> But if the broken-up functions are named well enough, like in the example above, then it shouldn't be necessary to see how they're implemented, *unless you're tracking down a bug*.
(Emphasis added.) Yeah, exactly. Tracking down a bug is one of the times when code clarity is most important, and over-eager abstraction is most annoying.
This whole story is wild to me, having worked previously at a different company which was also a frequent target of criticism on the blog run by the victims.
Sure, we definitely used to eye-roll at some articles. But my ass would've been fired in 5 seconds for even sending them an email, let alone anything beyond that.
My guess, though it comes with a bunch of other problems that people have listed below:
Population density means zip codes are extremely concentrated in NYC, especially in Manhattan. Traveling down the length of Manhattan might have you cover ~25 zip codes in 13 miles. Basically, a zip code isn't a meaningful measure of distance like it might be elsewhere. A radius might also place you into NJ, which could be an issue depending on what you're trying to accomplish: Delivering something from Queens might be significantly easier than from NJ, even though Queens might be further.
That's my guess anyway, though it's odd because nowadays looking something up via zip code + radius is generally actually easier in Manhattan (with a map you can easily see whether it would require coming from Jersey, crossing the park, etc.).