Oranguru's comments

You can absolutely sync your vault without a paid subscription. Simply save it within your OneDrive or Google Drive folder. Alternatively, you could use Syncthing if you prefer a self-hosted solution.
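For example, a minimal sketch on Linux/macOS, assuming your vault currently lives at ~/Notes and your Google Drive folder is synced locally at ~/GoogleDrive (adjust both paths to your setup):

    # move the vault into the synced folder, then re-open it in Obsidian
    mv ~/Notes ~/GoogleDrive/Notes
    # Obsidian: "Open folder as vault" -> ~/GoogleDrive/Notes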


However, these rights should be guaranteed to a company operating in the USA and strictly adhering to US law. Of course, if the law is (arbitrarily) changed to make this illegal because of the Chinese government's stake, then the company could be forced to shut down, but that would be inconsistent with the Constitution.


> However, these rights should be guaranteed to a company operating in the USA and strictly adhering to US law.

ByteDance is a Chinese company with its headquarters in China. The so-called TikTok ban is a call for ByteDance to sell off its controlling position over TikTok; otherwise TikTok can no longer operate in the US.

The fact that China is spinning this issue as a TikTok ban is telling.


Tesla not being able to sell cars in China unless it was sold to a Chinese company - what would you call that? :)


I'd call it "the way things actually were from 1994 until 2022": https://www.carscoops.com/2021/12/china-will-no-longer-requi...


so not “ban”? :)


Giga Shanghai opened in 2019 while that was still in effect, so no, not "ban".


If they want to do that, of course they can. (And indeed, Chinese car companies are already treated differently in US law to such an extent that they aren't in the US market at all.)


You didn't say why that would be inconsistent with the Constitution, you merely asserted that it is. But it isn't.

Our government gets to decide the terms under which businesses operate in this country. Always has and always will. This is not a constitutional question.


Because both of these are based on Chromium (the open-source version of Chrome).


This approach is valuable because it abstracts away certain complexities for the user, allowing them to focus on the code itself. I found it especially beneficial for users who are unwilling to learn functional languages or to parallelize code in imperative languages. HPC specialists might not be the current target audience, and code generation can always be improved over time; based on the dev's comments, I trust that it will be.


You can easily fix this using a grammar constraint with llama.cpp. Add this to the command:

    --grammar "root ::= [^一-鿿ぁ-ゟァ-ヿ가-힣]*"

This bans CJK characters (the ranges above cover Chinese hanzi, Japanese kana, and Korean hangul) from the sampling process. Works for Yi and Qwen models.
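For example, a full invocation might look like this (a sketch only: the model path and prompt are placeholders, and older llama.cpp builds name the binary ./main instead of ./llama-cli):

    ./llama-cli -m ./models/qwen-7b-chat.Q4_K_M.gguf \
      -p "Answer in English: what is the capital of France?" \
      --grammar "root ::= [^一-鿿ぁ-ゟァ-ヿ가-힣]*"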


You can access GPUs within containers using CDI (Container Device Interface): https://docs.nvidia.com/datacenter/cloud-native/container-to...

No additional tools (e.g., nvidia-ctk) are needed. Docker has recently added support for CDI in version 25.0.
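A rough sketch of what that looks like, assuming a CDI spec for your GPUs has already been generated on the host (e.g. under /etc/cdi) and CDI support is enabled in the Docker daemon config:

    # request all GPUs described by the CDI spec via their
    # fully qualified device name
    docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi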


Very interesting. Thanks for the references. Have you released the code or pre-trained models yet or do you plan to do so at some point?


The code is all released already. You can find it here: https://github.com/rwth-i6/returnn-experiments/tree/master/2...

This is TensorFlow-based, but I also have another PyTorch-based implementation, also public (inside our other repo, i6_experiments). It's currently not so easy to set up, but I'm working on a simpler pipeline in PyTorch.

We don't have the models online yet, but we can upload them later. But I'm not sure how useful they are outside of research, as they are specifically for those research tasks (Librispeech, Tedlium), and probably don't perform too well on other data.


Thank you for your work. I have been having trouble achieving the desired level of formality in the generated text: when I ask for slightly formal content, the result tends to be too formal, but when I ask the model to reduce the formality or use a semi-formal tone, the text becomes too informal. This should let me exercise more control over the style of the model's output instead of constantly battling with it.


This comment does nothing more than repeat the information provided in the title, and in a very verbose style. I don't understand the trend of GPT-generated comments that has lately been taking over HN and Twitter threads. They are insipid, add absolutely nothing of value to the discussion, and are generally a waste of time to read.


I believe the point is to craft several sock-puppets which can then be sold on to advertisers and SEO folk.


hmmm. This bot would need some karma first?


It's about minimising friction in our tasks and reducing any unnecessary obstacles. Even seemingly minor actions that only take a few seconds can build up over time and generate a sense of frustration, especially when they become frequent. Personally, I have found that dealing with this kind of friction can erode my overall productivity, as I unconsciously shy away from these tedious tasks that involve manual repetition, no matter how small. That's why many of us choose to invest a little time in coming up with these small automations that free us from the clutches of monotonous and repetitive tasks, allowing us to focus on the core of our work.


Very well said. It also feels really good to get rid of small things that annoy you every day by creating your own solutions, cf.:

https://www.joelonsoftware.com/2000/04/10/controlling-your-e...

