Hacker News | unfug's comments

One of the things that I've learned in recent years closely relates to several of those points: there is no shame in being wrong; it is much more important to adapt quickly to new information that proves your original assumptions incorrect.

Especially on the non-IT side of the corporate world, a lot of people seem to have an irrational fear of making a decision on anything, in case that decision ends up being wrong. Instead you end up with either a watered-down solution that doesn't really solve the problem or a decision to defer the problem entirely.


> There is no shame in being wrong; it is much more important to adapt quickly to new information that proves your original assumptions incorrect.

It is also important to create a culture that doesn't punish mistakes excessively. You want smart, creative people taking some measure of risk. If they perceive they will be punished for small mistakes, they'll stop making mistakes (which means they'll start doing only mundane, boring things) or they'll just leave.


That's an interesting idea. Did the original author of the work walk the junior team member through what was going on, or did they just point them at it and have them check through the work on their own?

It seems like that could be helpful. My main concern is that a lot of junior devs need some specific direction (as opposed to just being told to go read some code), so we might need to set up some specific expectations for the results of the code review (maybe have them help write documentation?).


In the situation I described, review [of drawings] was part of the QA process, and going through the QA process was a requirement before anything was "shipped" [sent to the plant for fabrication]. So all that happened was that the normal roles for junior and senior staff were reversed for a small portion of the normal work-flow [i.e. a senior staff member did a lot of QA and a little production and a junior staff member did a lot of production and a little QA].

If review were just something that was done sometimes [those sometimes being either slack periods or part of correcting a situation in need of remediation], then it would not have made sense. It was only because the junior staff member's sign-off was required "to ship" that the process was meaningful.

One of the side-effects was that some of my wild and crazy ideas that arose from not knowing any better were identified as seriously off target. Another side effect was that some of those wild and crazy ideas got adopted as standard methods because they worked and improved workflow.

If it isn't clear, I don't think code review assignments as punishment or busy work are worth pursuing.


We're currently transitioning away from TFS and I think locking down changes to pull requests (and potentially blocking most devs' access to commit to master) will help quite a bit. You can pull off a similar workflow in TFS, it's just much less convenient.

I think being able to see the full end-to-end diff for a task/story (as opposed to digging through a series of changesets in TFS) will make the benefits of any suggested refactorings much more obvious.


Off hand I can think of a few reasons for using the mapping object as they did in angular-translate:

- It may make sense to translate the same English sentence to something slightly different in different contexts in German.

- JSLint/JSHint should be able to check for usages of an identifier like TITLE, PARAGRAPH_ABOUT_FOO, etc. more easily than comparing strings (you'd have to build something custom for that).

- Brevity. If you have the same long string of text written twice it may be useful to have a shorter identifier for it.

I agree with not translating language names though; in the real world that doesn't really make sense.
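To make the mapping style concrete, here is a minimal sketch of the kind of key-to-string tables angular-translate works with. The keys and strings are hypothetical, and the `translate` helper is a toy stand-in for the library's own lookup, not its actual API:

```javascript
// Hypothetical translation tables in the angular-translate style:
// short identifiers map to full strings, one table per language.
const translations = {
  en: {
    TITLE: 'Welcome',
    PARAGRAPH_ABOUT_FOO: 'Foo is a long description that would be painful to repeat.'
  },
  de: {
    TITLE: 'Willkommen',
    PARAGRAPH_ABOUT_FOO: 'Foo ist eine lange Beschreibung.'
  }
};

// A minimal lookup, falling back to the key itself when the language
// or the key is missing.
function translate(lang, key) {
  const table = translations[lang] || {};
  return table[key] !== undefined ? table[key] : key;
}

console.log(translate('de', 'TITLE')); // "Willkommen"
console.log(translate('fr', 'TITLE')); // falls back to "TITLE"
```

The brevity point above shows up here directly: templates reference `TITLE` once per use instead of repeating the full English sentence.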


> It may make sense to translate the same English sentence to something slightly different in different contexts in German

Then obviously you add context. This has all been solved for years: http://www.gnu.org/software/gettext/manual/html_node/Context...
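For illustration, a tiny sketch of how gettext-style contexts disambiguate the same source string. The catalog entries are invented, but the `"context\u0004msgid"` key format mirrors how GNU gettext joins msgctxt and msgid internally (with an EOT byte):

```javascript
// Hypothetical catalog: the same English msgid "Open" translates
// differently in German depending on its context.
const catalog = {
  'menu\u0004Open': 'Öffnen',    // menu action
  'state\u0004Open': 'Geöffnet'  // e.g. a shop's opening status
};

// pgettext-style lookup: resolve a msgid within a context,
// falling back to the untranslated msgid when there is no entry.
function pgettext(context, msgid) {
  const key = context + '\u0004' + msgid;
  return catalog[key] !== undefined ? catalog[key] : msgid;
}

console.log(pgettext('menu', 'Open'));  // "Öffnen"
console.log(pgettext('state', 'Open')); // "Geöffnet"
```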

> JSLint/JSHint should be able to check for usages on an identifier like TITLE, PARAGRAPH_ABOUT_FOO, etc. easier than comparing strings (you'd have to build something custom for that)

That's not difficult to build if you absolutely need it.

> Brevity. If you have the same long string of text written twice it may be useful to have a shorter identifier for it

You don't need them twice. Add an attribute indicating that the text can be translated. That's all.


I'm in Louisville and we have had a hell of a time trying to hire quality developers. The pool of quality talent here is much smaller than the demand.


Google-stalking you just now, it looks like you work at a place called ignew which has seven employees and annual revenue of $1 million.

The company site doesn't even have job listings though. This is a problem. In a search I found a single job listing for your company on the web, for an iPhone+Android developer. This was on a set of sites under something called MyCareerNetwork.com, which I had never heard of before as a job board. Checking for jobs within 100 miles of Louisville on Joel's board, where many Microsoft developers hang out and which is much better known, there are 0 search results anywhere near your city, so I know you are not currently advertising there.

If I've guessed correctly about where you work, then you guys should probably at least advertise in known locations if you are really having problems finding people.

I would assume, of course, that since you can't find anyone locally you are covering the basic hygiene issues such as offering good packages, full relocation, and paying for interview expenses. But the one ad I found specifically says you won't pay relocation. This sort of thing is a huge red flag.

So, you say you are having trouble finding people. Based on my brief investigation, though, it doesn't seem to be a problem with the market; it's a problem with not trying hard and not providing even the minimal incentives necessary.

You should be aware that iPhone and Android skills are both quite hot right now and developers that are expert in both are extremely rare and command a premium. You should be looking at paying full relocation and salary in the $250k range. That's the market.


Office design should reflect what the employees in the office are trying to accomplish. As a developer, all of that crazy stuff in Zappos' office would be a huge distraction to me. I haven't checked, but I'd be a little surprised if developers' cubes/offices were as over-the-top at Zappos as those for their customer service reps.


With services like Rdio, Netflix, and Steam it's actually easier for me to get what I want legally now. Lots of people don't pirate because of the cost; it just used to be the easiest/fastest way to get music/movies/games.


"It turns out that there is something that can compete with free: easy."

http://www.time.com/time/specials/packages/0,28757,2032304,0...


> Lots of people don't pirate because of the cost; it just used to be the easiest/fastest way to get music/movies/games.

Rhapsody opened in 2001. If it took 9 years for music piracy to be "over", then I think there's something more at work than the "it's easier to pirate" defense.


I know it worked for me personally; since Amazon MP3 has been around, I have not needed to look elsewhere. I do, however, find it annoying to have to keep a VPN going to the US to download music from Amazon in Canada. The restriction is ridiculous: it provides no real safety, and because there is no alternative for users, they will pirate instead. It's just dumb politics.

The Canadian version of the RIAA is even dumber than was originally thought possible.


> I know it worked for me personally; since Amazon MP3 has been around, I have not needed to look elsewhere.

Amazon MP3 opened to the public ~3 years ago. Did you pirate your music before? What requirement did it satisfy? Multi-platform-available DRM-free watermark-less high-bitrate MP3s?


That's pretty much the reason (DRM-free, no watermark, and quality). I've used Amazon for at least 2 years now, and yes, I used to pirate music. I started with Napster and used everything in between until I discovered Amazon MP3.

I guess the fact that I've been employed for the past 5 years and had money to spend on music also helped change my habits. Hence why I'm against suing college students who become your customers later.


Indeed, the argument about pirating's ease being its biggest pro ignores one of its largest demographics—teens and college students with no or little money but a huge appetite for media.

(If you want to get pedantic about it, I suppose you could say that this is just an extension of the definition of ease-of-use; members of this demographic would have to go find and work a job in order to obtain media legally. In that sense, pirating is easier.)



I haven't verified that Digg does this, but most big sites that do logins over HTTP do some sort of hashing on the client side so that the actual password isn't sent in plaintext.

As has been mentioned, there are all sorts of issues with using https, and for sites where security isn't a huge concern (it's always important, but not as much for a site like HN vs. a bank), hashing client side and sending the data over plain http can be enough.


Hashing the password client side and then sending it in plain text does nothing. In that case you don't need the password to log in; you only need the hash. Intercept the username and hash over an HTTP connection and then impersonate that user. It makes no difference to the person stealing accounts and only means more work for the hapless developer who thinks this is some form of security. You'd be better off doing nothing. It's just as secure (or rather insecure) and less work.


I greatly over-simplified the algorithm in my initial post, but there are advantages to a straight hash of the password.

Even if you can intercept the person's hash and log in to that site, you still don't know the user's original plaintext password. A lot of people re-use the same password across many (or all) of their online accounts. The attacker may be able to impersonate the user on the current site, but they don't have the actual password to gain access to other sites.

What I was talking about in my original post (but didn't properly explain) was using hashing as part of a larger algorithm for doing a challenge/response login. The idea is explained here: http://pajhome.org.uk/crypt/md5/auth.html

I agree that HTTPS is the way to go if security is a huge concern (like for banks), but for a simpler site you could get away with using CHAP and still provide a reasonable level of security.


I've been using PHP at work and on personal projects for years. It has a very low barrier to entry, which is both good and bad. It's easy/cheap to hire entry-level programmers to write simple PHP, but it's all too easy to let them jump into big projects too soon and write something that turns into a huge mess. Well-written PHP code can be just as easy to read as Python/Ruby/etc., but PHP makes it far too easy to write something that is a horrible mess.

At work we are still mostly a PHP shop, but anything I write on my own now gets done in Python. PHP's complete lack of a coherent function naming scheme is enough reason alone to get away from it.

