Show HN: SlickGPT (slickgpt.vercel.app)
72 points by Shackles on March 27, 2023 | hide | past | favorite | 66 comments
SlickGPT is a lightweight "use-your-own-API-key" ChatGPT client written in Svelte. It offers GPT-4 integration, a userless share feature, and other superpowers.


> SlickGPT allows you to run your own local ChatGPT instance

No, it doesn't. If it did, it would be a 700GB+ download and not need OpenAI API keys. Wish folks would stop saying it.


German press is just as bad:

https://www.golem.de/news/ki-auf-dem-intel-8088-entwickler-b...

"Entwickler bringt ChatGPT auf MS-DOS zum Laufen"

in English:

"Developer gets ChatGPT to run on MS-DOS".


This portion is especially hilarious:

"Auf den Entwickler kam allerdings direkt eine weitere Hürde hinzu: MS-DOS integriert nativ keinen Netzwerk-Stack oder eine entsprechende API. Die ist allerdings für den Zugriff auf ChatGPT zwingend notwendig."

DeepL:

"However, the developer was immediately faced with another hurdle: MS-DOS does not natively integrate a network stack or a corresponding API. However, this is absolutely necessary for accessing ChatGPT."

Golem is a heavily trafficked German technology site.


My favorite is: I am training GPT to...

what they are doing: adding a prompt


ChatGPT is the name of a frontend app/service (albeit a first-party one). The model itself is called gpt-3.5-turbo. I think you're wrong on this one.


> We’ve trained a model called ChatGPT which interacts in a conversational way.

> ChatGPT is a sibling model to InstructGPT

OpenAI's own words


A little outdated, though. I'd say they've effectively renamed the model: there's no model called ChatGPT available via the API, and the docs now explicitly name gpt-3.5-turbo as the model (again with no mention of ChatGPT). The Playground has a Chat mode, but no model by that name. And in the ChatGPT interface itself they say things like "based on old gpt-3.5-turbo model".


> I consider them to have renamed the model

I don't, and it's not outdated, it's current.


You're right. Fixed the wording in our README on GitHub; can't edit the HN post. Thanks


As with other OpenAI based applications, I don't feel comfortable sharing my private API key with an application I don't know. Especially when the webpage isn't telling me what this app is about. I suggest adding a small demo video or at least adding some bullets on what this app is trying to solve.


I get that fear and it's fair. SlickGPT is open source, self-hostable, and just runs locally, so you can check the code and run your own instance if you don't trust that the one we run on Vercel matches the repo. You can also set a soft and hard cap on your API quota on the OpenAI page and monitor usage to verify that only you are using the key.

The key is passed (see https://github.com/ShipBit/slickgpt/blob/main/src/lib/ChatIn...) to a Vercel edge function (see https://github.com/ShipBit/slickgpt/blob/main/src/routes/api...) that does the actual call to the OpenAI API. This function is stateless and doesn't store anything. I also put it in your localStorage using a store (see https://github.com/ShipBit/slickgpt/blob/main/src/misc/store...) so that you don't have to enter it over and over again.
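Roughly, that flow can be sketched like this (illustrative names and shapes, not SlickGPT's actual code): the browser sends its own key together with the messages, and a stateless function forwards them to OpenAI without logging or persisting anything.

```typescript
// Sketch of a stateless proxy endpoint: the client supplies its own
// OpenAI key plus the chat messages; the function only forwards them.
interface ChatRequest {
  apiKey: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

// Build the body forwarded to the OpenAI chat completions endpoint.
// Note the key is NOT part of the payload; it goes in the auth header only.
function buildOpenAiPayload(req: ChatRequest) {
  return {
    model: "gpt-3.5-turbo",
    messages: req.messages,
  };
}

async function handleChat(req: ChatRequest) {
  return fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${req.apiKey}`, // key is only used, never stored
    },
    body: JSON.stringify(buildOpenAiPayload(req)),
  });
}
```

Because the function holds no state, there is nothing for the operator to retain between requests; the key lives only in the request and in the user's localStorage.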

As for the "what is this trying to solve" - read our documentation in the GitHub repo over at https://github.com/ShipBit/slickgpt.


It's a self-hostable chat interface for the OpenAI GPT API.

From their GitHub README:

SlickGPT allows you to run your own local ChatGPT instance, host it yourself or just use our instance if you like. Users bring their own OpenAI API keys. SlickGPT offers them a very fancy user interface with a rich feature set like managing a local chat history (in the localStorage), a userless "Share" function for chats, a prominent context editor, and token cost calculation and distribution.

https://github.com/ShipBit/slickgpt


> SlickGPT allows you to run your own local ChatGPT instance

I find it interesting that so many projects claim to be "run your own ChatGPT", when they're in fact "run your own web UI". Are people really so ignorant of how this works that they're equating these concepts, or believing that the web UI is anything more than a razor thin veneer?

I guess maybe people use ChatGPT to refer to the interface and GPT-N to refer to the models, but that's not very accurate given the amount of tuning. It's more accurate to say that ChatGPT is a productised GPT instance with a web UI.


It's just a hot marketing strategy; it gets a totally different reception outside of HN. There are already people making $5k+ MRR with apps that are just a ChatGPT wrapper passing user input along with some additional context and prompts.

I could easily see a wave of small startups that are just a few inputs plus the context needed to generate documents people currently write by hand. Like subpoenas, doctors' letters to insurance companies, etc. You can stand one up with a bit of industry knowledge and a couple hundred LOC.


I expect this, too. Scammers are gonna scam.


> I find it interesting that so many projects claim to be "run your own ChatGPT", when they're in fact "run your own web UI". Are people really so ignorant of how this works that they're equating these concepts, or believing that the web UI is anything more than a razor thin veneer?

Perhaps, but also there is a huge incentive to blur that line, obviously from the developers of these UIs, but also from the people at media outlets covering them. If they told the truth, it wouldn't be as big of a story. And I'm just waiting for the "7 year old makes their own GPT" story. For a long time now, the overwhelming majority of tech journalism has been just uncritically rephrasing press releases.


For it to be good, it should also have a specific system prompt and chat history, which is fed to the model. With LangChain this isn't too difficult to build, but it's not just a web UI.

This one doesn’t appear to use a custom prompt, so I’m guessing it doesn’t perform as well as ChatGPT, but it does look like it passes message history.


SlickGPT passes message history including a "context" or "system" message which it clearly explains to the user at the start of every chat. This is probably the most important thing the official chat.openai.com client fails to explain or expose to users directly.
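In other words, each request rebuilds a messages array with the system/"context" message in front of the history. A minimal sketch of that shape (names are illustrative, not SlickGPT's actual code):

```typescript
// The chat completions API takes an ordered list of messages; putting an
// optional "system" message first is how the user-editable context is sent.
type Role = "system" | "user" | "assistant";
interface Message {
  role: Role;
  content: string;
}

function buildMessages(
  context: string,
  history: Message[],
  userInput: string
): Message[] {
  const messages: Message[] = [];
  if (context) {
    // The editable "context" becomes the system message.
    messages.push({ role: "system", content: context });
  }
  return [...messages, ...history, { role: "user", content: userInput }];
}
```

Since the API itself is stateless, sending the full history on every turn is what makes the conversation "remember" earlier messages.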


But you can do all that with the OpenAI Playground for the chat API, and it supports GPT-4. I don't really see what your service adds to that.


You can use the same model that ChatGPT uses through the API, and it isn't called 'ChatGPT'; it's called 'GPT-4'. So at least officially, ChatGPT refers to the web UI, not the model.


I'm not sure that's true though. The first line of the ChatGPT blog post is:

> We’ve trained a model called ChatGPT which interacts in a conversational way

It really seems that GPT-3/3.5/4/etc are the generic models, and that ChatGPT is the fine tuning, a safety layer, and so on. This is also why ChatGPT and Bing don't provide the same or even remotely equivalent answers despite both running on GPT-4.


As far as I know, the current ChatGPT running on the chat.openai.com website is specifically gpt-3.5-turbo.


It's like when everything became an 'app' all of a sudden.


ChatGPT Plus is $20/mo, but if you bring your own interface it's fractions of a penny per API call. So for most, this will be cheaper.


In this case it is.

A lot of the other projects posted use other models that people could use.


100% agreed! I don't feel like sharing my API key either! Too bad, I wanted to see if it can understand Simplified Chinese.


OpenAI’s API key page lets you issue new keys, see when they were last used, and withdraw them at any time.

So this seems a bit paranoid.


Stripe's dashboard lets you do the same thing but like hell I'm giving you my secret key.


This is a standard way to grant API access in literally every piece of software with an API that I've used in the last 5-10 years. I'm not sure what informs such paranoia.


Absolutely incorrect. This is the literal purpose of OAuth.


True, but it is still weird to paste your credentials into an unknown service.


Yes of course, it certainly is. So before you do that, you should make the service not be unknown to you.

Good thing this is open source!


I’m not sure if there is any way to tell if an open source service is actually using the open source code. It still comes down to trusting the url.


If you're using a third-party UI for ChatGPT, it might as well be a browser extension [0] or a native app [1]. It really shines when you break it out of its website interface and start using it like a command palette/context menu.

[0] https://sublimegpt.com

[1] https://macgpt.com


I'm beginning to think there's one too many of these kinds of apps, but if anyone is interested in a cross-platform (Mac, Linux, Windows) alternative, I've built a command-palette type interface [1] to GPT that supports code blocks/LaTeX too.

[1] https://github.com/JinayJain/gpt-anywhere/releases/latest


There are. But as NVIDIA has proven, being the last tool of a type to be developed isn't necessarily a futile undertaking.


It’s the new hello world. Here’s a Telegram bot: https://github.com/danneu/telegram-chatgpt-bot

https://t.me/god_in_a_bot


Couldn't agree more; that is exactly why we built Buddy, an open source Chrome extension with a cool cmdk interface to OpenAI's API. Everything runs locally (besides the API calls). HN post: https://news.ycombinator.com/item?id=35299433 , GitHub: https://github.com/INT-Calutt/buddy.


Agreed! I've found that staying in my workflow leads to fewer inadvertent context switches (like ChatGPT in-browser -> HN -> ... -> rabbit hole). MacGPT looks great. I created a similar plugin for my Windows keystroke launcher (think Alfred/Spotlight) for the same reason. If anyone's interested, you can check it out here: https://github.com/dkgv/pinpoint


Yeah I don't even understand the point of this SlickGPT.


It's a user-friendly web client for the latest OpenAI chat completion API with additional features compared to the official chat.openai.com client (mainly shared chats and better parameter customization). It's open source and can easily be run locally or self-hosted, and it's built in one of the most popular frontend stacks of recent years. Nothing more and nothing less.


Have you tried regularly using the ChatGPT interface from your phone? It’s obvious that the UI is written by people that don’t take UX seriously.


Clean UI, loads fast, but there's significant latency when I type a prompt into the text box... probably client state being debounced or something


This. There is very noticeable latency when typing. This is on a very powerful desktop PC using Chrome. If you hold down a character on your keyboard it shows the issue more clearly, the input will sporadically appear every few seconds.


Okay, this shouldn't happen. Please create a GitHub issue, I'll investigate.



I just started putting together these Chat API-based UIs in this repo, in case anyone's interested: https://github.com/itsuka-dev/awesome-chatgpt-ui


Heh, I wrote one too (in Elixir + LiveView) - https://github.com/dvcrn/chatgpt-ui

Background: we wanted to explore providing ChatGPT access to employees at $company, but restrict access with Google OAuth using company email addresses and use the company OpenAI API key on the backend. So I hacked together something quick that matches our requirements and that we can self-host.


I'm missing the purpose of this.


It seems to me that ChatGPT's history archive is down 98% of the time, so that would be one reason for using a custom web UI.


Has anyone ever needed that history? I sure haven't. It's like having a copy of every google search and result you ever got, entirely pointless.


I needed it many times (and it was often unavailable). When I wanted to continue with previous conversation or refer back to something GPT said.

I even experienced it indirectly when I asked someone on reddit about a prompt they have used to make something (generate a pretty good video script) and OP said that they would gladly share it, but unfortunately the history was down.

I don't use GPT to get information in a search-engine manner.


I think it'd be better if you could just 'save' the ones you want to keep, the rest are automatically discarded.


I found it very helpful when writing a blog post series about teaching GPT to translate ROT13-encoded text: instead of documenting each prompt as I went, I could just get into a flow state with the model and try a bunch of things, going back later to pick out the moments that gave me an insight.


Yes, all the time, it's very frustrating to use it without it


Design-wise it's not bad, but for some reason it takes a whole second for the words I type on my phone to appear in the prompt area. Quite a pity.


Is it slow while typing on your phone or is it slow showing the characters in the input field? Which phone do you use?


It's very slow when typing characters in the input field. It feels like a react application that's doing a bunch of component re-rendering on every keypress.


Got it, it's a bug and happens because of the token calculation. We'll fix it. Thanks for reporting.
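For anyone curious, the usual fix for this class of input lag is to debounce the expensive work so it runs once after the user pauses typing instead of on every keystroke. A sketch (the token heuristic here is a stand-in, not SlickGPT's actual counter):

```typescript
// Stand-in for a real token counter; a rough English-text heuristic
// is about 4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Generic debounce: delays `fn` until `ms` milliseconds have passed
// without another call, discarding the intermediate calls.
function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Recompute the token count at most once per 250 ms pause in typing,
// so keystrokes themselves stay cheap.
const updateTokenCount = debounce((text: string) => {
  console.log(`~${estimateTokens(text)} tokens`);
}, 250);
```

Wiring `updateTokenCount` to the input's change handler keeps the per-keystroke work trivial while the display still stays fresh.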


I like the big UI; reminds me of 37signals (I think?)


Typing is very slow, that needs to be fixed.


Do you mean if you type something or if you retrieve the answer/completion from ChatGPT? Which model are you using?


I tried, and just typing letters is slow; they appear ~1 second after typing. Deleting text is slow too. I'm on Linux + Chrome.


Simple, clean UI, I like it.


Why?


I was wondering the same. The HN cheerleading for members of the tribe makes this place feel like a recipe blog. "Looks yum! Can't wait to tryy." 5 stars.



