
There should be recalls from more manufacturers. Someone I know purchased a surveillance camera with a major brand name (Samsung) from Costco [0] just a few weeks ago that gave me a root shell simply by telnetting in as root with no password, and there was no way to reliably set a root password or disable telnet. It was returned the following day. Last I checked, Costco is still selling it. This problem isn't confined to cheap Chinese cameras you can buy online. Vulnerable devices are being sold at major American retailers and they are still on the shelves.

[0] http://www.costco.com/Samsung-SmartCam-HD-Plus-1080p-Wi-Fi-I...


Yeah, this is one reason I still don't have security cameras set up on my home network. If I decide to get them, I am going for a dedicated Ethernet network just for the cameras, with no Internet connection. I may allow a VPN to a server inside the house to see footage. According to the Wirecutter, Nest cameras are some of the better commercial ones, but I still haven't bought one or done any review of my own.


When we were shopping for a baby cam to keep an eye on the baby, I opted to get a simple RF cam [1] instead of the more popular IP cameras that allow you to use your smartphone and monitor from anywhere.

The lower-tech approach means you could park a van in my driveway and probably pick up the signal, but that's a lot harder (and more obvious) than scanning an IP range from anywhere in the world and finding vulnerable devices.

[1] https://www.amazon.com/Foscam-FBM3501-Wireless-Digital-Monit...


I got a Wansview camera, assigned it a static IP, and just don't allow any traffic that doesn't originate from the Chromecasts or the tablet -- it's nice because all the TVs do picture-in-picture with the baby camera.

Still, it's pretty weird seeing the constant log entries of it trying to reach a couple of servers - I've been doing traffic captures since I'd like to see what it's trying to do. One is obviously the plug-and-play stuff, but it's crazy that those packets apparently get broadcast outside the network (? - I haven't really looked into how that PnP IP/port is handled, but it's getting caught at my firewall).


We have IP cameras (Axis) on a dedicated VLAN that doesn't have access to/from the WLAN, and things work pretty well. I don't trust VPNs (the NSA clearly watered down the IPsec standard and can definitely compromise most IPsec connections [not sure about IKEv2]; OpenVPN is a messy pile of shit that is undoubtedly swamped with vulnerabilities), but I do allow a VPN into my camera network. The compromise I made is to send a notification email for each established VPN connection, regardless of how it was established, so at least I'll probably know if someone else connects.
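
If anyone wants to copy the notify-on-connect part, here's a minimal sketch, assuming an OpenVPN-style client-connect hook and a working local mail command (the script path and address are made up; the same idea works with an IPsec updown script):

  #!/bin/sh
  # /etc/openvpn/notify-connect.sh, wired up via "client-connect" in the server config.
  # OpenVPN exports the peer's details ($common_name, $trusted_ip) in the environment.
  printf 'VPN login: CN=%s from %s at %s\n' \
      "$common_name" "$trusted_ip" "$(date)" \
    | mail -s "VPN connection established" you@example.com
  exit 0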

With Nest, you have to use their "cloud" for it to be fully functional, which to me makes it a no-go for anybody like you who is actually concerned with his/her security/privacy.

The most popular IP camera on Amazon is a Chinese camera that gets your Wifi password through their app via the "cloud". Fuck that.


>gets your Wifi password through their app via the "cloud".

And? What does it matter that someone has a password that's only good for about 100 metres around your house?

Of all the passwords I have, my wifi password is the one I care least about.

I'd be more worried about what the app itself is doing on my phone - I caught one attempting to update outside of the Play Store. No thanks.


> I'd be more worried about what the app itself is doing on my phone - I caught one attempting to update outside of the Play Store.

If it is Chinese-made, that might just be because the Play Store is blocked by the Great Firewall. Apps in China need to use some other way to update.


This is a great point, but the app in question was Broadlink eControl - https://play.google.com/store/apps/details?id=com.broadlink....


Made in China.


I have my router firewall blocking all traffic between the Internet and my cameras. My router also offers OpenVPN for when I need access. It's not perfect, but it provides pretty good protection against someone using generic methods to compromise my devices, as we've seen here.
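
If your router runs something iptables-based, the rules themselves are only a couple of lines. A rough sketch, assuming the cameras live on 192.168.2.0/24 and the WAN interface is eth0 (adjust both for your setup):

  # Drop anything from the camera subnet heading out to the Internet...
  iptables -A FORWARD -s 192.168.2.0/24 -o eth0 -j DROP
  # ...and anything from the Internet trying to reach the cameras.
  iptables -A FORWARD -d 192.168.2.0/24 -i eth0 -j DROP

VPN clients come in on an internal interface, so they can still reach the cameras.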


If you have the interest and know-how, you can build your own with an RPi. That's what I eventually did.

Admittedly, it's a far cry from an off-the-shelf solution though.
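
For anyone curious, the usual recipe is a Pi camera module (or a cheap USB webcam) plus the motion package. A sketch, assuming Raspbian/Debian; option names shift a bit between motion versions, so treat the config lines as approximate:

  sudo apt-get install motion
  # Then in /etc/motion/motion.conf, roughly:
  #   stream_port 8081         # MJPEG stream you can watch from the LAN
  #   stream_localhost off     # allow other hosts on the LAN to view it
  #   width 1280
  #   height 720
  #   target_dir /var/lib/motion   # where motion-triggered snapshots end up
  sudo service motion start

Pair it with firewall rules that keep the Pi off the Internet and nothing ever has to phone home.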


If we start down the legislative road for all elements connected to the Internet, where is that going to end up?


"...hackers were able to take over the cameras because users had not changed the devices' default passwords."

"Security issues are a problem facing all mankind," it said. "Since industry giants have experienced them, Xiongmai is not afraid to experience them once, too."

The fact that they aren't scared of having brain-dead security failures in their products is, to put it lightly, telling.


Liability rests with the network itself. Bitching about devices or retailers is pissing against the wind.


Keep going with the down votes...

Shall I give you a couple of hundred comments for ammunition?

On the other hand, if you disagree with my perspective, man up and present an alternate perspective.

How is that?


Please don't complain about downvotes. Keep the discussion centered around the content of the article.


Yes, you are right.

The thing is, I'm seriously concerned about the rhetoric in this thread. There seems to be a general bias toward legislative action, and I know it isn't going to go well if that's the way things turn.

My reaction against down votes is pure frustration though. Down votes are a dead end.

I think I'll go back to my happy place...


The downvotes (for comments after the first) are more for the spamming and metacomplaints than anything else.


You can down vote me all you want. Go for it


Oh no, there are vulnerable devices on the Internet. Do you have any idea what you are saying?

EVERY device on the Internet is vulnerable, and it makes no difference to Dyn DNS where it was manufactured or how long it had been running without an update.

Zero Day Exploits are real!

Wake up!


> probably the more likely scenario is that some insider leak some of the distributors private keys which would allow certain releases to be cracked - but would likely also trigger key roll over.

It could also be the case that someone leaked the plaintext symmetric key(s) for this specific movie's DCP. If someone gained access to private/secret keys on a compromised DCP player somewhere, it'd be smarter to leak symmetric keys for individual movies to avoid detection.


I think that's because OCI currently only has a specification for the runtime, not the distributable image. But it seems like as of a few weeks ago, work is underway to standardize the distributable image as well: https://github.com/opencontainers/image-spec


I was almost excited! From the FAQ on the project you linked [0]:

> Q: Why doesn't this project mention distribution?

> A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the OCI Scope Table. There has been some discussion on the TOB mailing list to make distribution an optional layer but this topic is a work in progress.

I really hope CoreOS manages to get distribution into the scope of OCI. We need to move beyond Docker images. Standardizing the on-disk layout in OCI is only mildly useful, in my opinion.

[0] https://github.com/opencontainers/image-spec#faq


I'm curious to know if there's a reason everyone installs Homebrew in /usr/local (other than it being the default installation path). I've always chosen to install it in ~/.homebrew and haven't had any problems. Everything I install with Homebrew seems to handle an alternative prefix without issue.
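
For reference, getting it into a custom prefix is just a clone plus a PATH entry. A sketch (check the Homebrew docs for the currently blessed incantation):

  git clone https://github.com/Homebrew/homebrew.git ~/.homebrew
  # Then add to ~/.bash_profile (or your shell's equivalent):
  export PATH="$HOME/.homebrew/bin:$PATH"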


brew supports pre-compiled binary packages (what they call bottles) only when you use the default install location, /usr/local. If you use any other prefix, brew will always compile all packages on your system, which might take a long time depending on your system's performance. So using /usr/local saves time and energy, since all brew needs to do is download and install binary packages.


Actually, many bottles are marked as safe to install anywhere, so even if you don't use /usr/local you will still get a lot of things installed via bottles (but not everything).


It's a common Unix practice to put manually managed system-wide stuff (as opposed to stuff managed by the OS/distribution) in /usr/local.


Yep, I agree and that's where I install stuff on everything other than OS X. One distinction though, I think, is that most of the stuff I install on OS X I don't want to be available system-wide. I'm (typically) not using Homebrew to install daemons that run all the time or things that serve critical system/network functions, so I've never seen a reason to make them available to the entire system. I agree that goes against the Unix way but I started preferring this way of using Homebrew after I had similar problems upgrading and even updating OS X.

Also, what if for some reason a single machine is shared by two people and they need different versions of some programs installed with Homebrew? Installing everything in /usr/local isn't going to look like a good idea then.


It's not going to look like a good idea even if they need the same version. You're not supposed to run homebrew as root; it runs as your own user instead. If two users try to install things with homebrew in the same directory, you're going to end up with some things owned by one user and some things owned by the other, and things will start failing pretty soon.

A while ago I floated the idea of having a separate, low-privilege "brew" user that installs things, with the brew command automatically switching to that user, but there was no interest.
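
Something like this thin wrapper is roughly what I had in mind (just a sketch; "brew" here is a hypothetical dedicated account, and shared caches/$HOME handling would need real thought):

  #!/bin/sh
  # Run every brew command as a dedicated low-privilege user instead of yourself.
  exec sudo -H -u brew /usr/local/bin/brew "$@"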


Homebrew is not really "manually managed" though - I prefer to keep only the things I actually install manually in /usr/local.


It is, if by manually managed we mean _not managed by the system/OS, and not updated without user interaction_. If you updated OS X and had Homebrew binaries in /usr/bin, they'd be gone. Homebrew also won't tell you that there are updates, and it never updates packages unless you decide to get a new version.


But it's not common practice to 775 it and change its owner from root:wheel (both of which Homebrew does).


For me, because it is the default, and there is a section in the Homebrew FAQ saying that many build scripts break if it isn't in `/usr/local/`.


Homebrew is great, but using /usr/local as the default install location is just a bad idea.


> in ~/.homebrew

Not sure about OS X, but common corporate Unix practice is to mount /home/ noexec.

Non-core (i.e. outside central package management) installations go to /opt/.


Because it has never caused any trouble having it in /usr/local, and as another user said, /usr/local/bin is part of the standard path.

Other than this one-liner, which is done once, there really isn't any extra hassle with using the default.

I'd be more interested in why you didn't want to install it there?


I installed mine in /usr/local/brew because I already had a ton of stuff in /usr/local managed with GNU stow before brew even existed. I don't have any problems with it.

Someone I know puts it in ~/brew and that works just fine, too. His reasoning is that /usr/local/ is for all users, and though he's really the only user on his laptop, it's just wrong to install a bunch of stuff that's just for him into a global user directory.


Mostly because of my (perhaps irrational) OCD about not wanting to touch global system paths or files, even though under the FHS /usr/local is where you're supposed to install manually managed libraries and binaries. I believe Homebrew likes to have its path owned by you instead of root, so I think it makes more sense for stuff that's going to be owned by me to live in my home folder rather than /usr/local.


Actually it sounds like you will need to run this/restore permissions after every future OS X update.


Right. The OS X image now has to contain a /usr/local directory so that it exists unrestricted after you install the OS (otherwise you would be unable to create it yourself, because /usr is restricted). It has to ship with some permissions, so it rightly ships owned by root. The installer will apply these permissions each time it runs.

Aside: I really wish Homebrew didn't encourage having a single user own /usr/local. If they're going to insist on never needing sudo to install things, it should just default to installing in your home directory.


Hmm, having been on El Capitan since the first beta, this hasn't happened. Do you have anywhere it states that?


https://github.com/Homebrew/homebrew/blob/master/share/doc/h...

"Apple documentation hints that /usr/local will be returned to root:wheel restricted permissions on every OS X update; Homebrew will be adding a brew doctor check to warn you when this happens in the near future."


Perhaps they should consider adding an option to install something into launchd that just does this.
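
Rolling your own isn't hard either: a LaunchDaemon that re-applies the ownership at every boot would do it. A rough sketch (the label and the yourname:admin owner are placeholders; the recursive chown mirrors the fix usually suggested for Homebrew); save it as /Library/LaunchDaemons/local.fix-usr-local.plist and load it with sudo launchctl load:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
    "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
    <key>Label</key><string>local.fix-usr-local</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/sbin/chown</string>
      <string>-R</string>
      <string>yourname:admin</string>
      <string>/usr/local</string>
    </array>
    <key>RunAtLoad</key><true/>
  </dict>
  </plist>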


Why did Stellar decide on using Facebook accounts to enforce the one account per person rule, instead of something better like mobile numbers? There are more people with cell phones than there are with Facebook accounts. Acquiring large batches of phone numbers to game the system is harder since it costs money and is more easily detectable.

Even if this guy isn't gaming the system with fake Facebook accounts, I'm sure others already are.


Facebook has nothing to do with this.

If everyone on Earth were given 5000 Stellars right now, how many people do you think would be willing to sell all of their Stellars for a beer?

It's basic economics.


Indirectly, they are requiring mobile numbers, because they require a verified Facebook account, which means giving your phone number to Facebook for verification (at least that was the case for me; it could be different in other countries).


Sure, they wouldn't be able to proxy all HTTP requests through their own servers like they're doing now, but they'd still be able to do MITM attacks at the IP level. They're already messing with routes to Google Public DNS IPs so they could just as easily mess with routes to YouTube's IPs. I don't think DNSSEC is the solution in cases like this. Somehow getting everyone to use SSL for everything is a much better solution in my opinion.


Can you do a traceroute to 8.8.4.4? If it's actually reaching Google's network, then yeah, they're doing deep packet inspection on DNS traffic. If not, they're probably just routing 8.8.4.4 to a DNS server they control.

If their goal is to manipulate traffic to www.youtube.com (probably to block access to certain videos), another solution would be for YouTube to require SSL for all connections coming from Turkish IPs. Of course, this wouldn't work if they got some Turkish (or other) CA to sign a bogus www.youtube.com certificate.

EDIT: As lawl points out, trying to require SSL on www.youtube.com won't work either, since they could just do an sslstrip type attack.

EDIT 2: Proof that they are in fact messing with routes to Google Public DNS anycast addresses (they're doing the same to OpenDNS): https://twitter.com/esesci/status/449902883933126659
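
EDIT 3: Another quick check, if anyone wants to try it (a sketch; you need some vantage point you trust, e.g. a resolver reached over a VPN, for the comparison to mean anything):

  # Ask the (possibly impersonated) 8.8.4.4 and a trusted path for the same name:
  dig +short www.youtube.com @8.8.4.4
  dig +short www.youtube.com @<resolver you reach over a VPN or other trusted link>

If the answers consistently differ, the "8.8.4.4" you're talking to isn't really Google's.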


Actually this seems likely, hmmm:

  traceroute to 8.8.4.4 (8.8.4.4), 64 hops max, 52 byte packets
   1  192.168.1.1 (192.168.1.1)  4.260 ms  0.969 ms  0.865 ms
   2  host-92-44-0-42.reverse.superonline.net (92.44.0.42)  7.465 ms  7.903 ms  7.384 ms
   3  host-82-222-174-177.reverse.superonline.net (82.222.174.177)  8.772 ms  13.703 ms  8.482 ms
   4  host-85-29-17-234.reverse.superonline.net (85.29.17.234)  7.736 ms  7.830 ms
      host-82-222-35-54.reverse.superonline.net (82.222.35.54)  11.449 ms
   5  212.156.45.29.static.turktelekom.com.tr (212.156.45.29)  30.518 ms  17.123 ms  8.674 ms
   6  inkilap-t2-1-kartal-t3-1.turktelekom.com.tr.220.212.81.in-addr.arpa (81.212.220.250)  9.945 ms *  15.140 ms
   7  * * *
   8  ulus-t3-4-ulus-t2-2.turktelekom.com.tr.223.212.81.in-addr.arpa (81.212.223.7)  18.020 ms  17.709 ms  15.444 ms
   9  * * *
  10  * * *
  11  * * *
  12  * * *
  13  * * *


Yeah, looks like they're mucking with the routes for Google Public DNS anycast IPs.

EDIT: More evidence that this is what's happening (they're doing the same to OpenDNS's anycast addresses): https://twitter.com/esesci/status/449902883933126659


> another solution would be for YouTube to require SSL for all connections coming from Turkish IPs.

What? NO! They are messing with the DNS results from 8.8.4.4 (Google DNS)

Too early for TLS to do anything. Maybe with HSTS, but I still doubt that HSTS is at all effective against state-level MITM.


You're right. Maybe if they turned on and required SSL for everyone visiting www.youtube.com, added www.youtube.com to Chrome's preloaded HSTS list, and somehow got everyone to use Chrome. Sadly, this probably won't happen, but DNSSEC adoption probably won't happen either. Even with DNSSEC, they could still do deep packet inspection on HTTP traffic going to YouTube IPs and initiate MITM attacks that way.
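
For reference, preloading starts with the site sending the HSTS header itself with the preload token, something like:

  Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Getting into Chrome's baked-in list is then a separate submission step on Google's side.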


Why not ditch the current DNS system and use Namecoin? If you have to force some piece of software onto users' computers, let's at least do it right...


Are you suggesting the government compromised a trusted SSL CA? Or are you just saying they blocked HTTPS?


Huh? The government of Turkey itself is a trusted CA: http://www.mozilla.org/en-US/about/governance/policies/secur... (Ctrl+F "Government of Turkey")



What application is this?


Here's a diff of that file from OS X 10.8.5 (Security-55179.13) to 10.9 (Security-55471): https://gist.github.com/alexyakoubian/9151610/revisions

Check line 631. Appears seemingly out of nowhere.


Good find!

But, are you sure the 10.8.5 (-55179.13) version isn't in some sense a later, patched maintenance branch compared to a 10.9 (-55471) that might have been frozen earlier? The release dates are very close (~2013-10-03 for 10.8.5; ~2013-10-22 for 10.9), and they might have already been separate branches.

(Is there a 10.8.4 version to compare?)


Good point. I went back through that file in every release since it was open-sourced in 10.8, and it stays exactly the same throughout all of 10.8.x (10.8 - 10.8.5); it only changes in 10.9. (You can check for yourself here: http://opensource.apple.com/)

It doesn't seem like what you said is the case here but obviously we're still missing changesets that may have been committed between 10.8.5 (Security-55179.13) and 10.9 (Security-55471). It'd be really interesting to do a git-blame on that file.

EDIT: Never mind, that file wasn't first open-sourced in 10.8. It's actually really old. (Look for directories starting with libsecurity_ssl in pre-10.8 OS X versions.) I didn't find anything particularly interesting in the old versions though.


Thanks for clarifying. I know silly errors like this can slip in, but I hope Apple does a deep x-ray on all circumstances surrounding the change.


This bug of an extra duplicate line looks like a merge issue to me.


So what does the goto fail portion of the code mean? It seems like it will jump there no matter what. What is the end outcome?

Thanks in advance


At that point the variable 'err' is still zero (noErr). So the function returns noErr, the caller thinks everything is good, and the communication is allowed.


Goto considered harmful, indeed!


It's not so much a problem with `goto` per se. Rather, it's a problem with if conditions used without code blocks.

Others might say it's a problem of whitespace-insensitive languages ;)


From a different point of view, part of the problem is the undefined state of the code at the "fail" label.

Execution will arrive there somehow, but the 'how' is unclear. The word "fail" implies you should reach that point only if there was an error, but that is a bad assumption in this case.

If the real answer to 'how did we get here?' was checked, then the bug could not hide in the undefined behavior. This would not allow a dangling goto to result in a false positive. A false negative will get someone's attention when their web page doesn't load.

Something like this could remove the undefined state:

      goto pass;
  fail:
    if ( err == 0 ) {
        assert( err != 0 );  // BUG! variable must contain an error number
        err = kSomeAppropriateErrorNumber;
    }
  pass:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;


Haha love the whitespace comment. This also makes you think of the static source-code analysis done at Apple. Surely static tools would have picked this up, no...?


Clang currently does not warn about this, but I'd wager that Xcode will boast a feature in the next version that detects dead code due to early returns/gotos.


Clang may not warn about whitespace issues, but it certainly does warn about unreachable code if -Wunreachable-code is on. This code would not compile in my project, because the unconditional goto leaves the following code unreachable.


Even if there were braces, the bug would still exist if the extra goto was outside the braces.

It might be more noticeable, but then, the original bug existed because no one noticed.


    return ERRCODE;
Would have produced the exact same bug.


FWIW, here's what I got on an Ubuntu 12.04.3 server running on my LAN. It looks like we should be fine with the defaults on Ubuntu, at least. (Obviously, it's always a good idea to use ufw/iptables to block everything you don't need exposed, so you don't have to worry about stuff like this.)

Before installing ntp (from another host on my LAN):

  $ ntpdc -n -c monlist 192.168.1.50
  ntpdc: read: Connection refused
After installing ntp (from another host on my LAN):

  $ ntpdc -n -c monlist 192.168.1.50
  192.168.1.50: timed out, nothing received
  ***Request timed out
After installing ntp (from the server itself):

  $ ntpdc -n -c monlist localhost
  remote address          port local address      count m ver rstr avgint  lstint
  ===============================================================================
  91.189.94.4              123 192.168.1.50         1 4 4    1d0     54      54
  ...
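
For servers that do end up exposed, the usual fix is to turn off the mode-7 monitor list and default to answering nothing, in /etc/ntp.conf (a sketch; exact lines depend on your ntpd version and what you actually need to serve):

  disable monitor
  restrict default kod nomodify notrap nopeer noquery
  restrict -6 default kod nomodify notrap nopeer noquery
  restrict 127.0.0.1
  restrict -6 ::1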


Thanks, I should have checked this.


> Also, web servers might want to consult NTP servers now and again.

CloudFlare doesn't host web servers for their customers. They forward HTTP/HTTPS requests to origin servers outside of their network (or serve from their cache). I don't think the DDoS traffic actually hit any of their customers' origin servers (assuming the origin server IPs aren't known to the attackers). But yeah, it still means CloudFlare's incoming pipes get hit with 400Gbps of traffic before they're able to filter anything.

