
Some thoughts based on my anecdotal experience — but it depends on the price you are willing to pay.

You can get quite good webcams for $100–300 (from Insta360, Obsbot, maybe Logitech …) which work out of the box over USB-C and have mostly okayish software that supports changing things like brightness, white balance, etc. These however still have small sensors and cannot achieve a good shallow depth of field (bokeh). Running them at higher sensitivity (ISO), e.g. in darker environments, inevitably causes noise. But if you just want to participate in meetings, it does not matter. I had a Logitech StreamCam and upgraded to an Insta360 Link 2C, which is definitely much better but still not on par with a proper camera. You should at least get a good key light or ring light.

The next step up would be mirrorless cameras made for vlogging, with built-in or interchangeable lenses, which can also be used as a webcam. They have much bigger sensors and better image quality at a price point of $400–1000, e.g. Sony ZV-E10 II, Fuji X-M5, Canon EOS M50 Mark II, … Most of them claim webcam support through the vendor's software, and they all have a flip screen that you can use for better framing. Fuji's software is bad though, so I wouldn't recommend it on a Mac; I can't speak to the other vendors.

If you have a camera with an HDMI output that produces a clean signal (without any overlays), you can also buy an HDMI-to-USB capture device and feed that into OBS, which lets you set up a virtual webcam. There are cheap no-name USB capture cards that produce mediocre images, and more top-of-the-line ones like the Elgato Cam Link. This should be the most device-independent variant, where you're also not dependent on any vendor's proprietary software.
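
One way to sanity-check that the capture card (or OBS's virtual camera) shows up as a regular video device is to enumerate video inputs from a browser console. A minimal TypeScript sketch; the device labels in the comments are just examples and vary by OS and driver:

    // List all video input devices the OS exposes (capture cards,
    // built-in webcams, OBS's virtual camera, ...).
    // Camera permission must be granted first, otherwise labels are empty.
    async function listVideoInputs(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      const devices = await navigator.mediaDevices.enumerateDevices();
      for (const d of devices) {
        if (d.kind === "videoinput") {
          console.log(d.label); // e.g. "Cam Link 4K" or "OBS Virtual Camera"
        }
      }
      stream.getTracks().forEach((t) => t.stop()); // release the camera
    }

    listVideoInputs();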


Thank you for a comprehensive answer, I appreciate the time you put into it.


Since when is the WordPress ecosystem this … bad? I built WP websites 10–15 years ago and it was a quite straightforward experience back then. These days, there seems to be no way around themes and plugins that all have very limited free versions and constantly nag you about upgrading to the pro version, in a million different styles of banners and popups. Hosting providers have made it easier to deploy WordPress in a one-click manner, but anything beyond a basic page (sending email, backups, contact forms) already turns into a nightmare. No thanks!


Gave it a bunch of technical papers and standards, and while it makes up stuff that just isn't true, that's to be expected from the underlying system. This could be fixed, e.g., with another internal round of fact-checking, or with manual annotations.

What really stands out, I think, is how it could allow researchers who have trouble communicating publicly to find new ways to express themselves. I listened to the podcast about a topic I've been researching (and publishing/speaking about) for more than 10 years, and it still gave me some new talking points and illustrative examples that'd be really helpful in conversations with people unfamiliar with the research.

And while that could probably also be done in a purely text-based manner with all of the SOTA LLMs, it's much more engaging to listen to it embedded within a conversation.


The underlying NotebookLM does better at this: each claim in the note cites a block of text in the source, so it's engineered to be more factually grounded.

I would not be surprised if the second pass to generate the podcast style loses some of this fidelity.


Last time I checked, they did. In fact, their 2FA system is so messed up that it thinks my mobile number is an authenticator app, and so I can't even request a code to delete the 2FA method, let alone add a new one:

https://i.imgur.com/PoZ2ssc.png https://i.imgur.com/heiJer6.png


Unfortunately, not in a business context, where marketers can claim "legitimate interest" in various ways. Also, in what way would it matter that they are illegal? Random companies keep sending them anyway; there are virtually no legal repercussions here.


Some countries provide official channels for complaining about cold calls/emails, so at least it puts the sender at risk.

It boils down to a risk/reward trade-off, but I doubt that someone would so easily send thousands of spam mails and also publicly boast about it.


Curious: do you mean business-to-business?

Otherwise I don't think you can argue any legitimate interest.


Yes, I mean cold sales emails – marketers reaching out to CEOs or other decision makers, selling them staff augmentation services, growth hacking, marketing support, lead generation, design services, etc. They'd claim legitimate interest by "personalizing" the email and claiming that it is relevant for you in a business sense. (Anyway, I don't think that these are fully compliant with GDPR either, because most often, they will have scraped your email address from somewhere, and do not provide a way to unsubscribe.)


Interesting. In my country that also seems to be illegal, i.e. cold calling can never be a valid reason (without consent).

https://www.ris.bka.gv.at/NormDokument.wxe?Abfrage=Bundesnor...

Edit: clarified "never"


Just saw this. As a small business owner in the B2B market, I think this sounds very useful. Gmail's existing spam filters do not reliably detect this type of marketing.

I wish your landing page had a simple "how it works" explanation with a screenshot or diagram, rather than forcing me to sign in directly and grant the app permission to read *and* send emails. Also, I don't see any pricing?

Finally, signing up, I got an error:

Error 1101 Ray ID: 89d4e0957c2f5a44 • 2024-07-03 06:39:15 UTC - Worker threw exception


Thanks for the useful feedback! Totally forgot that pricing was never added to the landing page; I've added it to the todo list to fix.

Where in the process did that error occur for you?

I see in the logs that an error was registered, but unfortunately there's no detail attached. I've beefed up the logging a bit in the onboarding journey on my side to see what could be breaking here if we try again.

Mind trying to log in/sign up again? You can use "HACKERNEWS" as a promo code, which makes the first month free.


The error occurred right after granting permissions from my Google account. The permissions were granted but I could never access your application page. I just tried again, now I got an "Error handling OAuth callback" after granting permissions. Signing in again does not work either. (I did remove all of the app's permissions in my Google security settings before, so to Google it looked like the application was requesting all of its permissions again.)


I do see it in the logs now. So weird, as dozens of people signed up successfully without this issue. I've added more logging around the specific area where the issue occurs; maybe another login attempt will uncover the gap.

Thanks for removing the permissions in Google, as that's also key in this debugging.

Mind if I send you an email to debug further there?


Quick shoutout to slhck for helping me debug and resolve this issue. Thank you!

tl;dr: Ran into issues because the DB was expecting a profile picture URL from Google auth (string) or NULL, but JavaScript being JavaScript tried to insert "undefined".
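
In code, the failure mode looks roughly like this. A hypothetical sketch (the field and function names are made up; the real schema was presumably similar in spirit):

    // Google's OAuth profile payload may omit the picture field entirely.
    interface GoogleProfile {
      email: string;
      picture?: string; // undefined when the user has no profile photo
    }

    function pictureColumnValue(profile: GoogleProfile): string | null {
      // Buggy version: String(undefined) === "undefined", so the DB either
      // stores a bogus string or rejects the insert, depending on schema:
      // return String(profile.picture);

      // Fix: coerce a missing value to an explicit NULL.
      return profile.picture ?? null;
    }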


I agree it would be good to clarify this. I have a product that does some background job processing using Sidekiq and Redis, and it seems that this would not constitute "making the software available", in particular where it says:

Making the functionality of the Software or Modified version available to third parties includes (...) offering a product or service, the value of which entirely or primarily derives from the value of the Software or Modified version (...).

Since the value is not _primarily_ derived from using Redis, I guess it's fine. I am sure the majority of projects using Redis in some way do not derive their main value from Redis.


Generally, capitalize longer words. But this really depends on the style guide you choose to follow (MLA, APA, CMOS). See: https://en.wikipedia.org/wiki/Title_case
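
As a toy illustration only (the small-word list and the "always capitalize first/last word" rule are arbitrary simplifications; no real style guide is this simple):

    // Naive title case: capitalize every word except short "small" words,
    // but always capitalize the first and last word. Real style guides
    // (MLA, APA, CMOS) each have their own, more nuanced rules.
    const SMALL_WORDS = new Set(["a", "an", "and", "as", "at", "but", "by",
      "for", "in", "of", "on", "or", "the", "to"]);

    function titleCase(s: string): string {
      const words = s.toLowerCase().split(/\s+/);
      return words
        .map((w, i) => {
          const isEdge = i === 0 || i === words.length - 1;
          return isEdge || !SMALL_WORDS.has(w)
            ? w.charAt(0).toUpperCase() + w.slice(1)
            : w;
        })
        .join(" ");
    }

    console.log(titleCase("the art of computer programming"));
    // => "The Art of Computer Programming"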


> This is overrated - of course that's how you do it, what else would you do?

That's not what has been done previously for adaptive streaming. I guess you are referring to what encoding modes like CRF do for an individual, entire file? Or where else has this kind of approach been shown before?

In the early days of streaming you would've done constant bitrate for MPEG-TS, even adding zero bytes to pad "easy" scenes. Later you'd have selected 2-pass ABR with some VBV bitrate constraints so as not to mess up the decoding buffer. At the time, YouTube did something where they tried to predict the CRF needed to achieve a certain (average) bitrate target (can't find the reference anymore). With per-title encoding (which was also popularized by Netflix) you could change the target bitrates for an entire title based on a prior complexity analysis. It took quite some time for other players in the field to hop on the per-title encoding train.

The novelty here is going to a per-scene/per-shot level, and exhaustively finding the best possible combination of QP/resolution pairs for an entire encoding ladder, optimizing for subjective quality rather than just MSE.
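
To illustrate the per-shot idea with a sketch (all numbers and the quality values are hypothetical; a real pipeline would run trial encodes and measure something like VMAF on the decoded, upscaled output):

    // Per-shot ladder construction, heavily simplified: for each shot, try
    // every (resolution, QP) candidate, keep only the Pareto-optimal
    // (bitrate, quality) points, then pick one point per bitrate budget.
    interface EncodePoint {
      resolution: number;  // vertical lines, e.g. 360, 480, 720
      qp: number;
      bitrateKbps: number; // measured after a trial encode
      quality: number;     // e.g. VMAF of the decoded + upscaled result
    }

    function paretoFront(points: EncodePoint[]): EncodePoint[] {
      // Sort by bitrate; keep only points that strictly improve quality,
      // i.e. drop any point dominated by a cheaper-or-equal one.
      const sorted = [...points].sort((a, b) => a.bitrateKbps - b.bitrateKbps);
      const front: EncodePoint[] = [];
      for (const p of sorted) {
        if (front.length === 0 || p.quality > front[front.length - 1].quality) {
          front.push(p);
        }
      }
      return front;
    }

    function pickForBudget(front: EncodePoint[], budgetKbps: number): EncodePoint | undefined {
      // Best quality achievable within the bitrate budget.
      let best: EncodePoint | undefined;
      for (const p of front) {
        if (p.bitrateKbps <= budgetKbps) best = p;
      }
      return best;
    }

    // Hypothetical trial encodes for one shot. Note the cross-resolution
    // trade-off: 480p at a decent QP beats 720p at a starved bitrate.
    const shotPoints: EncodePoint[] = [
      { resolution: 360, qp: 30, bitrateKbps: 400, quality: 70 },
      { resolution: 480, qp: 34, bitrateKbps: 450, quality: 74 },
      { resolution: 480, qp: 30, bitrateKbps: 700, quality: 80 },
      { resolution: 720, qp: 34, bitrateKbps: 750, quality: 78 },
    ];
    console.log(pickForBudget(paretoFront(shotPoints), 500)); // -> the 480p/QP 34 point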


> exhaustively finding the best possible combination of QP/resolution pairs for an entire encoding ladder that also optimizes subjective quality – and not just MSE.

This is unnecessary if the encoder is well-written. It's like how some people used to run multipass encoders 3 or 4 times just in case the result got better. You only need one analysis pass to find the optimal quality at a bitrate.


Sure, the whole point of CRF is to set a quality target and forget about it, or, with ABR, to do as well as you can with an average bitrate target (under constraints). But you can't do that across resolutions: e.g., do you pick the higher-bitrate 360p version, or the lower-bitrate 480p one, considering both coding artifacts and upscaling degradation?


At those two resolutions you'd pick the higher-resolution one. I agree that this generation of codecs doesn't scale all the way up to 4K, and at that point you might need to make some smart decisions.

I think it should be possible to decide in one shot in the codec though. My memory is that codecs (image and video) have tried implementing scalable resolutions before, but it didn't catch on simply because dropping resolution is almost never better than dropping bitrate.


Precisely this! There's no need to code the same information twice; it just means you have to think more about what the graph shows.

