DropkickM16's comments on Hacker News

lol


You are seriously begging the question here.


Yes, as the world moves more and more toward online commerce, a statute specifically crafted for "fraud using transmission of information over wires" is just the modern tool we need to fight back.


Brett Kavanaugh: U.S. Circuit Judge (2006–2018) https://en.wikipedia.org/wiki/Brett_Kavanaugh#U.S._Circuit_J...


No: the employee option pool is generally common stock, which is instead priced at the most recent 409A valuation. The preferred stock issued in the VC deals you're talking about usually carries liquidation preferences and other terms that make it more valuable.

https://www.quora.com/What-is-a-409A-valuation


I think your first point is obvious, but I disagree with your second, at least 'at-scale'.

In the case where you have loose coupling but are representing multiple entities that scale in different ways, microservices let you separate concerns and scale each one according to its own memory/CPU/disk/network requirements. Even the best-factored code running in a single horizontally scaled layer will be inefficient if 90% of requests manipulate entity A while entities B, C, and D carry a lot of intricate business logic but are rarely touched; those are better off separated and scaled individually.

The overhead you allude to is definitely something to take into account. If you're a 5-20 person startup without a serious need to scale up or lacking people who have built the tools that make microservices easy, you should avoid the issue for now. But ultimately, decoupling services so they can horizontally scale independently is a huge win.


True, but if you prematurely divide your services up based upon what you think their performance requirements might be you will be wrong. That's premature optimization, which is, as we all know, the root of all evil.

If you've kept your services loosely coupled until the point where it becomes obvious that two parts of the code have markedly different performance requirements, and you then decide to split them into two separate services, then yes, that could work, provided you understand the trade-off you're making.

I don't think that's typically what people mean by 'microservices' however.

There's a good chance it'll still be wasted effort, too. Hardware is cheap. Developers are not. That applies to large businesses and small.


I always saw the point of microservices as not having to figure out the scaling part yet.

If you focus on grouping by purpose rather than by what resources they might use, then you can keep them on small instances until you better understand what kind of resources they require.

Once you learn their usage patterns, you can adapt more quickly (if scale is needed at all) without first having to split up the code.


You can always add a version=v1 query parameter and use it as an override when performing content negotiation. It's still not terribly convenient.
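A minimal sketch of what that override could look like, using only the standard library. The function name, the supported version set, and the vendored media-type convention (application/vnd.example.v1+json) are all illustrative assumptions, not anything from the comment above:

```python
# Version selection during content negotiation: the Accept header is
# the primary signal, but a "version" query parameter wins if present.
from urllib.parse import urlparse, parse_qs

SUPPORTED = {"v1", "v2"}
DEFAULT = "v2"

def pick_version(url: str, accept_header: str) -> str:
    # Query-parameter override takes precedence when it names a
    # version the server actually supports.
    qs = parse_qs(urlparse(url).query)
    override = qs.get("version", [None])[0]
    if override in SUPPORTED:
        return override
    # Otherwise look for a versioned media type in the Accept header,
    # e.g. application/vnd.example.v1+json
    for part in accept_header.split(","):
        media = part.split(";")[0].strip()
        for v in SUPPORTED:
            if f".{v}+" in media:
                return v
    return DEFAULT

print(pick_version("/users?version=v1", "application/json"))      # v1
print(pick_version("/users", "application/vnd.example.v1+json"))  # v1
print(pick_version("/users", "*/*"))                              # v2
```

The inconvenience the comment mentions shows up here: every handler now has to thread this logic (or a middleware equivalent) through its request path.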


It depends on how your API is designed. If it's a tightly coupled RPC-style API or something, this is obviously a bad idea because you'll break every client that didn't see the change coming. But the goal of designing APIs in a hypermedia style is to eliminate this tight coupling and include in each response all the information that a client would need to traverse the application's states. When this is designed properly, it is easier to change the API's functionality without breaking existing clients.

The web is a great example of this (although you may have to squint a bit to see it). Browsers don't need to add additional code or install plugins to handle forms with different fields or links to content of different types, because the semantics of those elements and their interactions are well-defined.
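To make the contrast concrete, here is a toy sketch of a client that never hardcodes URLs and instead discovers the next available transitions from each response. The response shapes and the "_links" convention are illustrative assumptions (loosely borrowed from HAL-style APIs), not anything specified in the comment above:

```python
# Canned responses standing in for an HTTP server. Each document
# advertises which transitions are currently valid via "_links".
RESPONSES = {
    "/orders": {
        "items": [{"id": 42, "status": "open"}],
        "_links": {"order": {"href": "/orders/42"}},
    },
    "/orders/42": {
        "id": 42,
        "status": "open",
        "_links": {"cancel": {"href": "/orders/42/cancel"}},
    },
}

def get(href):
    # Stand-in for an HTTP GET.
    return RESPONSES[href]

def follow(doc, rel):
    # Move to the next state by choosing among the transitions the
    # server advertised, rather than constructing the URL client-side.
    return get(doc["_links"][rel]["href"])

orders = get("/orders")
order = follow(orders, "order")
print(order["status"])              # open
print("cancel" in order["_links"])  # True
```

If the server later moves orders to /v2/orders/42, only the href in the response changes; a client written this way keeps working, which is the loose coupling the comment describes.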


I figured out how to insert strings with quotes on level 6 - if you use a param list like username[]={string"with'quotes'"}, it bypasses the safety check but still gets coerced to a string by the ORM. Unfortunately, I wasn't clever enough to actually do anything with that...


I don't think adding metadata to your responses goes against the ideas of HATEOAS. On the contrary, HATEOAS requires enough metadata about the application state and possible transitions so that a consumer can fully interact with the API without relying on out-of-band information (URL patterns being the most common example).


Some people argue that whatever you get back from the request should be the exact json/xml/whatever object representation.


Can you point me to a source for this argument? I'd be interested in taking a look at the reasoning behind it. Obviously, that's an approach a lot of people take for pragmatic reasons, but it doesn't seem to allow the kind of hyperlinking that's the core idea of HATEOAS. Of course, the term "object representation" is pretty generic, and could plausibly include links to related resources and application states.


I doubt Fielding would agree:

"The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations."

http://weblogs.java.net/blog/mkarg/archive/2010/02/14/what-h...

