
this article is a bit tough... feels like a marketing piece for some Google report.

did I read that wrong, or was this whole analysis based on percentages? like, what does a 76% -> 24% drop in memory-related bugs mean in terms of nominal bugs, or nominal bugs / kloc?

also, it mostly credited memory-safe languages, but then also just threw this out from the Google report:

> Based on what we've learned, it's become clear that we do not need to throw away or rewrite all our existing memory-unsafe code

tl;dr android may be producing fewer memory-related vulnerabilities, and it's not exactly clear how


this is really nice. well done


maybe Ubuntu? the web development flow (so closely tied to Firebase) might suffer a little bit, but I imagine a fork for non-web envs would be quick and welcome


I think I would be very interested in this


Full-stack software engineer in Los Angeles with 6+ years of experience. Looking for full-time opportunities.

Location: Los Angeles

Remote: Yes

Willing to relocate: No

Technologies: Python, JavaScript, Go, PHP, Godot, Docker, Kubernetes

Résumé/CV: https://github.com/tylergeery | https://geerydev.com | https://www.linkedin.com/in/tyler-geery-7b2a0b44

Email: tyler[dot]geery(at)yahoo(dot)com


Mostly linux-agnostic, but in case it helps, it was all right for me. The practice exercises at the end of each section offer some hands-on help with some linux tools

https://s3.amazonaws.com/saylordotorg-resources/wwwresources...

as recommended by this course

https://learn.saylor.org/course/view.php?id=84


I have the same problem. Where did a & b come from? Which two vectors are we taking the dot product of? And how is this less expensive?


In the decision function of an SVM, you compute the scalar products of the support vectors (points that are on the margin of your hyperplane, or more precisely, the points that constrain your hyperplane) and your new sample point:

  x · sv
The "z" the article defines is a new component that will be taken into account in the scalar product. A more mathematical way of seeing that is that you define a function phi that takes an original sample of your dataset, and transform it into a new vector. In our case, we simply add a new dimension (x3) based on the two original dimensions (x1, x2) that we add as a third component in our vector:

  phi(x) = [x1, x2, x1² + x2²]
The scalar product we will have to compute in our decision function can then be expressed as (this is the a and b in the article, i.e. the sample and the support vector in our new space):

  phi(x) · phi(sv)
The SVM doesn't need phi(x) or phi(sv) themselves, only the scalar product of those two vectors. The kernel trick is to find a function k that satisfies

  k(x, sv) = phi(x) · phi(sv)
and that satisfies Mercer's condition (I'll let Google explain what it is).

Your SVM will compute this (simpler) k function instead of the full scalar product. There are multiple "common" kernel functions used (Wikipedia has examples of them[1]), and choosing one is a parameter of your model (ideally, you would then set up a testing protocol to find the best one).

[1] https://en.wikipedia.org/wiki/Positive-definite_kernel#Examp...
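
To make that concrete, here's a quick NumPy sketch (toy numbers of my own, not from the article) checking that computing k directly on the original 2-D points gives the same value as mapping both points through phi first and taking the scalar product in 3-D:

  import numpy as np

  # Explicit feature map from above: phi(x) = [x1, x2, x1^2 + x2^2]
  def phi(x):
      return np.array([x[0], x[1], x[0]**2 + x[1]**2])

  # The matching kernel, computed entirely in the original 2-D space:
  # k(a, b) = a.b + (a.a)*(b.b), which equals phi(a).phi(b) for this phi
  def k(a, b):
      return np.dot(a, b) + np.dot(a, a) * np.dot(b, b)

  x = np.array([1.0, 2.0])    # a new sample (made-up values)
  sv = np.array([0.5, -1.0])  # a support vector (made-up values)

  print(np.dot(phi(x), phi(sv)))  # scalar product in the lifted space: 4.75
  print(k(x, sv))                 # same value, without ever building phi: 4.75

For this particular phi there isn't much of a saving, but for something like the RBF kernel the corresponding phi is infinite-dimensional, and computing k directly is the only practical option.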


Thank you. This was an amazing explanation. I am new to SVMs but did not make the connection that margin points (observations along the margin of the hyperplane) become your support vectors. This makes a lot more sense.

And if I am following correctly, it would make sense that the final step would then be:

We would maximize the dot product of a new observation with the support vectors to determine its classification (red or blue)


During the learning phase of the SVM, you try to find a hyperplane that maximizes the margin.

The decision function of an SVM can be written as:

  f(x) = sign(sum alpha_sv y_sv k(x, sv))
Where sum represents the sum over all support vectors "sv", "y_sv" represents the class of the support vector (red=1, blue=-1, for example), and "alpha_sv" is the result of the optimization during the learning phase (it is equal to zero for a point that is not a support vector, and is positive otherwise).

The decision function is a sum over all support vectors weighted by the "k" function (which can thus be seen as a similarity function between 2 points under your kernel); the y_sv will make each term positive or negative depending on the class of the support vector. You take the sign of this sum (1 -> red, -1 -> blue, in our example), and it gives you the predicted class of your sample.
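
A minimal sketch of that sum in NumPy (all the numbers are made up just to show the shape of the computation; in a real SVM the alpha_sv values and the support vectors come out of the training step, and there is usually also a bias term that I leave out to match the formula above):

  import numpy as np

  def k(a, b):
      # any valid kernel works here; this is the toy one from the earlier sketch
      return np.dot(a, b) + np.dot(a, a) * np.dot(b, b)

  # Hypothetical training output: 3 support vectors with their classes and weights
  support_vectors = np.array([[0.5, -1.0], [2.0, 1.0], [-1.0, 0.5]])
  y_sv = np.array([1, -1, 1])           # class of each support vector (red=1, blue=-1)
  alpha_sv = np.array([0.7, 0.4, 0.3])  # learned weights, positive only for support vectors

  def predict(x):
      total = sum(a * y * k(x, sv)
                  for a, y, sv in zip(alpha_sv, y_sv, support_vectors))
      return int(np.sign(total))        # 1 -> red, -1 -> blue

  print(predict(np.array([1.0, 2.0])))  # -1 with these made-up weights, i.e. blue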


Thanks for bringing some saner notation in here. I feel like blog posts and journal articles that abuse notation like this one just make people more allergic to math.


The whole time I could not stop thinking of this Wozniak quote. Although... I guess he eventually had a cofounder.

http://www.brainpickings.org/2012/01/18/woz-on-creativity-an...


If they just made the page wider, I might be able to read their code samples with minimal effort...


I know this price is pretty on par with the likes of Northwestern and other great schools that offer similar online programs. But it seems a bit insulting when schools like Georgia Tech are offering a $7k online CS Master's. A lot of the infrastructure of modern college is online anyway, so I'm not sure that just putting the lectures online and removing the facilities entirely counts as progress. Learning to scale these classrooms efficiently and continually reduce the fixed cost per student seems to be what we (well, me, without the dedication/job to justify paying for this degree) are waiting for. Especially if the goal really is to provide an education available to all online.

