Unity GPTits [v0.4.10] [MultisekaiStudio]


WtfwinPC

Member
Aug 2, 2017
101
88
Am I dumb, or are the offline modes just slow as all hell? Like, am I doing something wrong? Or is my computer just doodoofard and AI is beyond it?
Being able to run models depends on your graphics card; the more VRAM the better. I'm running a 2080 Ti, so I have 11 GB of GDDR6, which is just barely enough for low to mid-size models. A 7B model was pushing about 9 GB of my VRAM even though its download size is around 7 GB, and the 2 GB model was taking about 4 GB of VRAM. Bigger language models, 13B and up, want a minimum of 12 GB of VRAM, which is why so many people want a 4090 since that has 24 GB.

So yeah, if the model is bigger than what your video card can hold, it bogs down to where a reply takes a really long time.
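
If you want to sanity-check a model before downloading it, here's a minimal sketch, assuming an NVIDIA card with nvidia-smi on the PATH; the ~20% overhead factor is just a ballpark from my numbers above, not an exact rule:

```python
import os
import subprocess

def free_vram_mib() -> int:
    # Ask nvidia-smi for free VRAM in MiB (one line per GPU).
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0])

def model_fits(gguf_path: str, overhead: float = 1.2) -> bool:
    # Model file size in MiB, padded ~20% for the KV cache
    # and runtime buffers that sit on top of the weights.
    size_mib = os.path.getsize(gguf_path) / (1024 * 1024)
    return size_mib * overhead <= free_vram_mib()

if __name__ == "__main__":
    path = "models/mistral-7b-instruct.Q4_K_M.gguf"  # hypothetical path
    print("fits in VRAM" if model_fits(path) else "will spill / run slow")
```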
 
Last edited:

CazaMilf

Newbie
Dec 11, 2020
45
70
Why this over SillyTavern?
SillyTavern even has plugins, making it extensible by the community.
Why should you use this and not Silly?
 

WtfwinPC

Member
Aug 2, 2017
101
88
I'm using it because getting SillyTavern up and running is kind of a pain, at least for those who prefer to run everything locally. I have to admit, however, that I'm not big on running these language models, since I really need a better graphics card to make the chats interesting enough to play with.
 
  • Like
Reactions: ddtmm

WtfwinPC

Member
Aug 2, 2017
101
88
I'm not a coder, but why does it need the GPU so much?
That's because these models are what they call LLMs, or large language models, and they load entirely into the GPU. The GPU is extremely good at processing LLMs; newer graphics cards are even designed specifically for it. CPUs are comparatively slow and normal RAM (DDR) is slow too, so the GPU is usually how it's done. GDDR is very fast and sits right around the GPU chip, so even the travel time is minimal, and that low latency is what gives you quick replies, or fast image generation in Stable Diffusion.

A lot of these uncensored models work just like GPT-4, only without the censorship, so you can ask them all kinds of real-world questions and get a reply. We might be using them for sexy chat time, but they really are powerful tools; they can code for you too.

If you want to run models for more than just chatting, I'd encourage you to try oobabooga.
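
And if you'd rather poke at a model from code instead of a UI, here's a minimal sketch using the llama-cpp-python bindings; that library is my suggestion, not necessarily what this game uses under the hood, and the model path is hypothetical:

```python
# pip install llama-cpp-python (built with CUDA for GPU offload)
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=-1,   # -1 = offload every layer to the GPU
    n_ctx=4096,        # context window in tokens
)

out = llm(
    "Q: Why do LLMs run faster on a GPU than a CPU?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```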
 
  • Like
Reactions: ddtmm

Cinder Fall

New Member
Jun 18, 2017
10
4
It doesn't matter which model I try, whenever I press Start AI it just says 'Waiting....' in yellow forever. Anyone know how to fix that? Thanks in advance.
 

WtfwinPC

Member
Aug 2, 2017
101
88
It doesn't matter which model I try, whenever I press Start AI it just says 'Waiting....' in yellow forever. Anyone know how to fix that? Thanks in advance.
I think the 'Waiting' means it's done loading. Try chatting, or open Task Manager, go to the Performance tab, click on the GPU, and see if the VRAM has filled up; once the model is loaded into VRAM it's usually ready to go.
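
If the VRAM never fills, you can also check whether the backend the game spawns is actually listening. This probe assumes a KoboldCpp-style server on its default port 5001; both the port and the route are assumptions, so adjust if the game uses something else:

```python
# Quick probe: is a local KoboldCpp-style backend actually up?
# Port 5001 and the /api/v1/model route are KoboldCpp defaults,
# assumed here; the game may spawn something different.
import json
import urllib.request

URL = "http://127.0.0.1:5001/api/v1/model"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print("backend is up:", json.load(resp))
except OSError as exc:
    print("backend not reachable:", exc)
```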
 

DatBoi6983

New Member
Jul 29, 2020
11
11
How do you enable offline mode? I couldn't find any settings for it in the app or config files. Is there something else I need?
 

WtfwinPC

Member
Aug 2, 2017
101
88
How do you enable offline mode? I couldn't find any settings for it in the app or config files. Is there something else I need?
Start the game, go to Settings, open the AI tab, set the mode to Local AI, download a model, select that model in Settings, and click Start AI.
 

WtfwinPC

Member
Aug 2, 2017
101
88
So far I'm not really liking this program. I like how easy it is to get running, but the character will say she wants sex, and then when I reply about it she's totally shocked, as if she forgot what was being said (almost like the context tokens don't work at all right now). So far this doesn't seem to work nearly as well as oobabooga.
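
For anyone wondering why a character "forgets": the model only sees whatever chat history fits in its context window, so if the app trims or mismanages that history, earlier lines simply vanish. A toy sketch of the usual trimming logic; the 4-chars-per-token estimate is a rough heuristic, and this is not necessarily what this game actually does:

```python
def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget_tokens: int) -> list[str]:
    # Keep the most recent messages that fit in the token budget;
    # anything older falls out of the context and is "forgotten".
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "Her: I want you so badly right now.",
    "You: Then let's do it.",
    "Her: What?! How dare you suggest that!",
]
print(trim_history(history, budget_tokens=16))  # oldest line gets dropped
```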
 

Slayerz

Active Member
Aug 2, 2017
598
2,235
Darn, does this only use GGUF models? I was curious to see how it would do with local models + chat. Will there be an update for safetensors models?

I guess I'll go back to local Stable Diffusion / ComfyUI

 
  • Like
Reactions: rKnight

ddtmm

Member
Jun 5, 2020
234
223
That's because these models are what they call LLMs, or large language models, and they load entirely into the GPU. The GPU is extremely good at processing LLMs; newer graphics cards are even designed specifically for it. CPUs are comparatively slow and normal RAM (DDR) is slow too, so the GPU is usually how it's done. GDDR is very fast and sits right around the GPU chip, so even the travel time is minimal, and that low latency is what gives you quick replies, or fast image generation in Stable Diffusion.

A lot of these uncensored models work just like GPT-4, only without the censorship, so you can ask them all kinds of real-world questions and get a reply. We might be using them for sexy chat time, but they really are powerful tools; they can code for you too.

If you want to run models for more than just chatting, I'd encourage you to try oobabooga.
Thanks bro, much appreciated, a real answer. I half expected to get a lot of trolls, but nobody trolled! lol
 
  • Like
Reactions: Kalvinis

Cinder Fall

New Member
Jun 18, 2017
10
4
I think the 'Waiting' means it's done loading. Try chatting, or open Task Manager, go to the Performance tab, click on the GPU, and see if the VRAM has filled up; once the model is loaded into VRAM it's usually ready to go.
No dice. Sending a message just gives me "LOCAL AI ERROR: Cannot connect to Destination host."
 

Oldmike

New Member
Dec 19, 2017
4
1
I need help figuring out how to get the image generation to work. I have the --api flag set up, I have Stable Diffusion running, and the link is in the box, but I still get the image error: "detail not found".
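
For what it's worth, "detail not found" looks like FastAPI's {"detail": "Not Found"} response, which usually means the URL path is wrong or the --api flag didn't take effect. A quick probe against the standard Automatic1111 endpoints, assuming the default 127.0.0.1:7860 address:

```python
import json
import urllib.error
import urllib.request

BASE = "http://127.0.0.1:7860"  # default Automatic1111 webui address

# /sdapi/v1/sd-models only exists when the webui was started with --api.
try:
    with urllib.request.urlopen(BASE + "/sdapi/v1/sd-models", timeout=10) as resp:
        models = json.load(resp)
        print("API is up, models:", [m["model_name"] for m in models])
except urllib.error.HTTPError as exc:
    # A 404 here means the route is missing: re-check --api and the URL.
    print("HTTP error:", exc.code, exc.read().decode())
except OSError as exc:
    print("not reachable:", exc)
```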
 
  • Like
Reactions: horseradish1990

j0xssdf3

Member
Jul 24, 2019
160
222
Darn, does this only use GGUF models? I was curious to see how it would do with local models + chat. Will there be an update for safetensors models?

I guess I'll go back to local Stable Diffusion / ComfyUI
Well, that's just for images. You can also use Faraday to make sexual RP scenarios. The way you write them depends heavily on the model you use, but I put together a more optimized version of W++ that gave somewhat better results than standard W++, AliChat, and the third format that just lists everything. It can't generate images, but when you write the card right, it can sure as hell do text-based ERP.

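For anyone who hasn't seen W++: it's a pseudo-code style for character cards where traits are grouped into quoted, plus-separated lists. Here's a minimal Python sketch that assembles a generic card in that style; the character, fields, and values are purely illustrative, not the optimized template mentioned above:

```python
def wpp_card(name: str, **sections: list[str]) -> str:
    # Build a W++-style character block: each section becomes
    # SectionName("trait1" + "trait2" + ...).
    lines = [f'[character("{name}")', "{"]
    for section, traits in sections.items():
        joined = " + ".join(f'"{t}"' for t in traits)
        lines.append(f"{section}({joined})")
    lines.append("}]")
    return "\n".join(lines)

print(wpp_card(
    "Aqua",  # hypothetical character
    Species=["human"],
    Personality=["cheerful", "teasing", "stubborn"],
    Likes=["sweets", "late-night chats"],
))
```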
 
  • Thinking Face
Reactions: Slayerz

vampik

New Member
Jan 7, 2019
10
7
No dice. Sending a message just gives me "LOCAL AI ERROR: Cannot connect to Destination host."
I have the same problem. I checked Task Manager and saw that when I tried to start the GPT model, three more processes appeared: a Kobold something, something in the cmd line, and one more. When those processes start, the model finishes loading after a couple of minutes and the word "active" appears instead of "waiting". But sometimes those three applications don't start when I try to launch the model, and then the wait can take forever (I checked: nothing had loaded after eight hours of sleep).

I haven't yet figured out how or why those applications stop opening, but I decided to share my observations; maybe someone more experienced in these matters can suggest a solution. (Usually, if I re-unzip the game into a new folder, I can connect the model once, but after rebooting everything breaks again. I will continue my research.) P.S. I don't speak English well, but I hope you understand me.
 
  • Like
Reactions: Cinder Fall