Where's the social engineering part? Trying to convince people you aren't a scumbag so they become your friend long enough for you to swipe their password?
Actually, they don't use Slack because Slack would be the most expensive piece of software they license. The enterprise model from Slack is so prohibitively expensive it's like they're intentionally trying to drive away large customers.
I had an extension a while ago that I was attempting to publish to the Firefox add-ons store, and it was rejected on the grounds that it used eval. I don't remember why I needed eval, but the point is that this is something they already do. I'm guessing that previously they were allowing an exception for Angular.
These things aren't pain points in the browser the way they are in Node. I have never felt the need for an ORM in the browser. I have never dealt with client-side code that used so many libraries I had to worry about whether errors would be reported as thrown exceptions, via the first argument of a callback, or as rejected promises. No one (at least, no one I know) is installing node modules like isArray to use in the browser.
Yes, these things _could_ technically apply to the browser, but it's not commonplace. In the node world, these are all things you deal with consistently.
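For reference, the three error-reporting styles being contrasted look roughly like this (`readConfig` is a made-up function purely for illustration):

```javascript
// Three styles of error reporting in the JS ecosystem, as described above.
// readConfig is invented for this sketch; nothing here is a real library API.
let styles = [];

// 1. Thrown exceptions:
try {
  JSON.parse('not json');
} catch (e) {
  styles.push('exception');
}

// 2. Node-style error-first callbacks (the error is the first argument):
function readConfig(cb) { cb(new Error('missing file'), null); }
readConfig((err, data) => { if (err) styles.push('callback'); });

// 3. Rejected promises:
Promise.reject(new Error('nope'))
  .catch(() => styles.push('promise'))
  .then(() => console.log(styles)); // [ 'exception', 'callback', 'promise' ]
```

A library that mixes two or three of these is exactly the kind of thing you hit constantly in Node and rarely in plain browser code.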
> No one (at least, no one I know) is installing node modules like isArray to use in the browser.
Erm, well, considering the adoption rate of Browserify and Webpack, I'm going to disagree with you there. Especially considering React's momentum, which practically requires some form of module bundler.
Startup speed is terrible with that approach: each browserified module adds something like 10x more boilerplate code than an isArray implementation itself would take. Then every module dependency is resolved dynamically at runtime, which will also quickly become a performance problem even when you aren't using micromodules.
I haven't touched browserify, but with webpack what you said could not be more false...
Webpack doesn't bundle at runtime, it doesn't add any amount of code overhead per module that I can easily measure, and it doesn't trash startup speed.
Same goes for browserify. You can have lots of tiny modules and rebuilds are fast (especially with watchify). And yeah, the runtime perf of 'require' statements is a non-issue.
It's not really the same thing as an acquihire. In an acquihire a company acquires another company and then chooses which employees to give job offers to. In many cases only a handful of top employees are actually hired, the rest are fired. Stripe is saying that it wants to try to hire whole teams.
I don't think this is the same thing. If you want to hire the vast majority of people, it may well be easier to acquire the company. What happens if the other company is much bigger than you though? Stripe would never be able to buy Google, Microsoft, Apple etc., but they could potentially hire a small internal team from them.
Acquihires don't necessarily involve the entire team. I've certainly heard of people who were on the wrong end of "We'd like to buy your company and offer jobs to n-1 of you."
At a previous company, we initially set everything up on Heroku. As things got cost prohibitive, we moved them one by one to AWS. Since Heroku itself runs on AWS, it was easy enough to connect to our Heroku Postgres instance from AWS web servers, and it didn't add any extra latency vs. running directly on Heroku. Over time, we slowly migrated everything to our own infrastructure on AWS, but we were able to leverage Heroku to avoid all of that development effort until it was cost effective to take it on. You'll need to handle your own SSL termination if you run your web servers on AWS, but it's pretty trivial to do with ELB.
Out of curiosity: does Heroku give you a better deal as you grow? I know for a fact Salesforce will discount their licenses more and more as you buy bigger quantities and sign up for longer contracts. It'd be surprising if Heroku doesn't do this at all...!
It depends. I tried to switch to Simple, but at the time I was working as an independent contractor. The maximum size of a check you can deposit without mailing it to them is $3k. Having to mail in every check I received was a hassle that outweighed any benefits Simple provided over a regular bank account.
For what it's worth, last time I was doing independent contractor work, I used a billing firm, MBO Partners. They took 4% of my gross and treated me as a W2 employee. My taxes would come out of the gross, as would any benefits I elected. The rest would get direct deposited like a normal paycheck. I would have spent more than 4% of my time screwing around with paperwork, so I thought it a pretty good deal. I suspect there are other firms like this, but I liked 'em enough that I used them for 8 years or so without ever looking for alternatives.
I'm guessing MySQL doesn't support this (hence the need for a cron job), but Postgres lets you set a statement_timeout on the connection. It will forcibly kill queries that run beyond that timeout. I worked on an app not too long ago that would occasionally have some queries go off the rails and start blocking everything. We set up Postgres to automatically kill anything taking longer than 30s, and were then able to root out the issues without worrying about everything blocking on these broken queries and taking down our systems.
Author here. We thought about using statement_timeout, but we didn't like the lack of flexibility when we do have long-running queries that aren't necessarily deleterious to performance. Instead we opted to use two different users ("cron" for long-running jobs, and a normal read/write user pair for everything else) whose queries the long-query-killer script kills at different thresholds, effectively implementing a per-user timeout rather than a global timeout.
The best approach might be to set statement_timeout to the largest of all your per-user timeouts, as a backstop in case your watchdog script fails, can't connect, etc. for whatever reason.
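For what it's worth, the decision logic of that kind of per-user watchdog can be sketched as a pure function over rows shaped like pg_stat_activity output (the user names, timeout values, and the `queryStartMs` field are invented for this sketch; a real script would query pg_stat_activity and pass each returned pid to pg_terminate_backend):

```javascript
// Per-user query timeouts: decide which backends to kill, given rows adapted
// from pg_stat_activity (pid, usename, query start time in ms).
// User names and limits below are illustrative, not from the article.
const timeoutsMs = {
  cron: 30 * 60 * 1000, // long-running jobs get 30 minutes
  app: 30 * 1000,       // normal read/write traffic gets 30 seconds
};

function pidsToKill(rows, nowMs) {
  return rows
    .filter(r => {
      const limit = timeoutsMs[r.usename];
      // Unknown users are left alone; known users are checked against their limit.
      return limit !== undefined && nowMs - r.queryStartMs > limit;
    })
    .map(r => r.pid); // each pid would go to SELECT pg_terminate_backend(pid)
}

// Example: an 'app' query running 45s is over its limit; 'cron' is fine.
const now = Date.now();
console.log(pidsToKill([
  { pid: 101, usename: 'app',  queryStartMs: now - 45 * 1000 },
  { pid: 102, usename: 'cron', queryStartMs: now - 45 * 1000 },
], now)); // [ 101 ]
```

Keeping the kill decision as a pure function like this also makes the watchdog easy to test without a live database.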
I've not tried it, but according to the PostgreSQL documentation, it should be possible (and recommended) to set this on a per-role basis:
> The ALTER ROLE command allows both global and per-database settings to be overridden with user-specific values. (...)
> The SET command allows modification of the current value of those parameters that can be set locally to a session; it has no effect on other sessions. The corresponding function is set_config(setting_name, new_value, is_local). (...)
> Setting statement_timeout in postgresql.conf is not recommended because it would affect all sessions.
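Concretely, the per-role setup those quotes describe might look like the following; the role names and timeout values are invented to match the "cron vs. normal user" scheme above, and the statements are just shown as the strings a client (e.g. node-postgres) would send:

```javascript
// Per-role statement_timeout, per the ALTER ROLE docs quoted above.
// Role names ('app', 'cron') and the timeout values are assumptions.
const setupSql = [
  "ALTER ROLE app SET statement_timeout = '30s'",    // normal web traffic
  "ALTER ROLE cron SET statement_timeout = '30min'", // known long-running jobs
];

// And per the SET docs, one session can still loosen its own limit locally
// (SET LOCAL only lasts until the end of the current transaction):
const oneOffSql = "SET LOCAL statement_timeout = '2h'";

console.log(setupSql.concat(oneOffSql).join('\n'));
```

With that in place, the watchdog script becomes a backstop rather than the primary enforcement mechanism.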
If that's set on the connection, could a misbehaving or poorly written client still give you trouble? (Obviously it would be better not to have other teams connecting directly to your DBs, but sometimes you have to work with what you find in place...) Or is this something specified on the DB side of the connection, not the client?
My practice with long-running-query killer scripts is to have them ignore queries that are known to run long, by running those queries under a specific user and/or machine(s) and then hoarding and protecting those credentials.
Do you doubt your skills? Why are you interviewing if you're employed at a good company?
I hear this position often when people talk about contract-to-perm, and I just don't see it. If you're good at what you do, and people enjoy working with you, why wouldn't you take a contract-to-perm position? The only reasons I can see are that you're afraid you won't get the job (well, then you wouldn't have been a good fit if they had hired you), or that you end up not liking the team/company as much (same as if they had hired you directly.)
So, why wouldn't you? And I don't buy the 'I have a good job' line, because you wouldn't even be entertaining offers if you weren't interested in leaving your current role.
> you are afraid you don't get the job (well, then you wouldn't have been a good fit if they hired you)
Or, you rubbed the wrong politically connected person the wrong way. Yes, this is always a danger, but a contract employee is more expendable and easier to get rid of.
Or, they weren't really looking for a permanent hire and this was just a means of tricking someone into a contract who wouldn't otherwise take one.
Or, the company hits a rough patch and decides to freeze hiring to avoid layoffs.
There are a whole host of things that could go wrong in a contract situation that are either tempered or nonexistent when you are permanent.
And ... you don't get stock.
And ... when you go perm, your vesting schedule starts after your contract.
And ... if you get hurt/sick while you're a contractor, you're fired.
And ... you often don't have full facilities access as a contractor.
And ... after you go perm, half your co-workers still think you're a contractor, and blow off your email requests.
Not in my field, but perhaps the CtH position doesn't actually want quite what I am good at, and we didn't figure that out beforehand.
> Why are you interviewing if you're employed at a good company?
Because I've been there a long time and I'm bored, or because my boss isn't bad but isn't the best, or because opportunities for advancement seem slim, or because I want a shorter commute, or because the prospective company has a great reputation, or because I think I could work with better engineers somewhere else, or because...
I wouldn't take the chance at a CtH job because there's a significant probability that I end up totally out of a job, rather than employed in an OK (but not great) job.
Of course it's possible that I would accept a FT position and it would turn out just as poorly as a CtH that didn't work out. I would just expect the probability of it not working out to be much higher in a CtH scenario, because the company I'm working for has invested far less in me than one that has hired me full time.
It isn't a foregone conclusion that if you have good skills, you will be permanently hired out of a contract position. If it were a foregone conclusion, they would just hire perm.