Hi, I don’t have much experience with running local AIs, but I’ve tried using ChatGPT for psychoanalysis purposes, and the smarter (but limited for free users) model is amazing. I don’t like giving such personal information to OpenAI, though, so I’d like to set something similar up locally, if possible.

I’m running Fedora Linux and have had the best results with KoboldCpp, as it was by far the easiest to set up. My hardware is a Ryzen 7600, 32 GB of RAM, and a 7800 XT (16 GB VRAM). The two things I mostly want from this setup are the smartest model possible (I’ve tried some, and the responses just don’t feel as insightful or thought-provoking as ChatGPT’s) and something like ChatGPT’s memory handling, which I really like. I don’t need “real-time” conversation speed if it means I can get the smarter responses I’m looking for.

What models/setups would you recommend? Generally, I’ve been assuming newer + takes up more space = better, but I’m kind of disappointed with the results. That said, the largest models I’ve tried have only been around 16 GB, so is my setup even capable of running bigger ones? I’ve been hesitant to try, as I don’t have fast internet, and downloading a model usually means keeping my PC running overnight.
PS: I am planning to use this mostly as a way to grow/reflect, not for dealing with trauma or loneliness. If you are struggling and are considering AI for help, never forget that it cannot replace connections with real human beings.
Thank you so much for the suggestion! I tried the Q8 quant of the model you mentioned, and I am very impressed with the results. The output itself was exactly what I wanted, though the speed was a little on the slower side: loading my previous conversation, with a context of over 15k tokens, took about 10 minutes before the first response, but later messages were much faster. The web UI loses its connection almost every time, though, so I just manually copy the response from the terminal window into the web UI to save it for future context. I am currently downloading the Q6 model and might experiment with going even lower, for faster speeds and more stability, if the quality of the output doesn’t degrade too much.
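In case it helps with the slow first response and the disconnects, here is roughly how I'd launch KoboldCpp with the GPU backend and a context window big enough for that ~15k-token history. The flag names match the KoboldCpp builds I've used, but treat them as assumptions and verify against `python koboldcpp.py --help` on your version; the model filename and layer count here are placeholders:

```shell
# --usevulkan:   Vulkan backend, usually the easiest GPU path on AMD cards
# --gpulayers:   how many transformer layers to offload into the 16 GB of VRAM
#                (lower this if you hit out-of-memory errors)
# --contextsize: room for the ~15k-token conversation history
python koboldcpp.py --model ./your-model-Q6_K.gguf \
    --usevulkan --gpulayers 40 --contextsize 16384 --port 5001
```

Prompt processing for a long saved conversation is a one-time cost per launch; once the context is ingested, follow-up messages only process the new tokens, which matches the "later messages were much faster" behavior you saw.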