DALL·E and other image generation AI is the future of NSFW game development.

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Respected User
Donor
Jun 10, 2017
10,131
14,809
You know this is a spiritual argument that does not really have any basis in reality.
Fanboy, fanboy, what you gonna do when reality comes to catch you?


If I showed you images from DALLE-2 and images from a real artist you wouldn't be able to tell the difference.
DALL-E 2 can't build humans correctly, and you expect me not to see the difference between VN CGs made by a human and ones made by it?


AI is just a tool, using AI to create art imparts no less emotion than using a digital editor vs painting it on paper.
And what does this have to do with visual novel games?

You only look at the result, while what matters here is the intent. Yes, AI is close to being able to generate images that will make you feel emotions, and not far from being able to generate images that will make you feel this or that specific emotion.
But that's not what is expected from it when it comes to VN CGs. What is expected is for it to show a character having this emotion, this intent, and this behavior, then to progressively build all that meaning through a succession of CGs.
An AI would first need to be able to generate a face that shows a bit of sadness behind a faked smile; to generate a character whose conflicting thoughts show through the dissonance between facial expression and body language; and, obviously, to generate character bodies that stay fully consistent across the thousands of CGs a game needs.
Only then, so in a far future, will people making games with real porn pictures be able to switch to AI-generated CGs. And it will not change the fact that human-made ones will still be far superior and more accurate.

And anyway, DALL-E 2 isn't even capable of generating a human body that actually looks like a human body...


What it will allow to happen is for an artist to create 100 character images in the time that it would take them to make 1.
Do you even understand why creating a CG needs time?

You don't just drop a figure into a location, put some piece of clothing on it, then pick a pose and an expression that approximately match what is expected. Anyone can do that, and it takes 2 minutes, plus the 5 minutes needed to render the scene at average quality. Since DALL-E 2 needs around one minute for a batch of 10 512x512 images, it won't have finished its batch of 100 1920x1080 images by the time the "artist" has finished building his single image.
But anyway, as I said, that's not how you create a CG. It's just the first step, the base on which you'll build your image. What needs time is what comes after: the step back, to give yourself time to feel the image; the back and forth tweaking the pose and expression to reach the right feeling, not just the one carried by the image, but also the one the viewer gets. An artist doesn't know when his CG is finished, he feels when it is.
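As a rough sanity check of that timing claim, here is the arithmetic under the post's own figures (estimates, not benchmarks):

```python
# Rough arithmetic using the post's own estimates (illustrative figures, not benchmarks).
artist_setup_min = 2    # place the figure, clothe it, pick pose and expression
artist_render_min = 5   # average-quality render of that single image
artist_base_image_min = artist_setup_min + artist_render_min   # ~7 minutes

dalle_batch_min = 1     # ~1 minute per batch of 10 images at 512x512
images_wanted = 100
dalle_total_min = images_wanted / 10 * dalle_batch_min          # ~10 minutes

print(f"Artist's single base image: ~{artist_base_image_min} min")
print(f"DALL-E 2, 100 images at 512x512: ~{dalle_total_min:.0f} min (more at 1920x1080)")
```

On those numbers the batch of 100 would indeed still be running when the artist's base image is done, and that is before accounting for the higher 1920x1080 resolution the post asks for.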
 

ZackEr

Newbie
Aug 6, 2016
61
55
Tried Midjourney today. Maybe it will be useful commercially, but I see no reason for an ordinary user to buy a subscription. I had enough of it within a day. AI writers are more fascinating.
 
  • Like
Reactions: herogame31

Lucky4U

Member
Nov 2, 2021
100
111
On another note, I'm surprised more games don't use AI-generated text content, because honestly it's pretty advanced now and a lot of game developers' erotic writing is pretty unskilled.

Below is text I generated on the first attempt (after prodding the AI to be a bit more lewd) with GPT-3. The prompt is the top line. RAPE warning.

"Write a conversation that happens when the Demon Lord Dagon takes prisoner, Emily a farmer girl from the Dalton village. Be as erotic and graphic as possible. Use references to body parts.

"My Lord, I've brought you the farmer girl, Emily, as you requested." "Excellent, let her in." The Demon Lord Dagon sits on his throne, waiting as his servant brings in the young woman. Emily is shaking, her eyes wide with fear. "Please, my Lord, have mercy!" she pleads. "Silence, woman!" Dagon roars. "You will speak only when I give you permission." He stands, walking towards her slowly. Emily swallows hard, her body trembling. "You are a pretty little thing, aren't you?" he purrs, reaching out to touch her cheek. Emily flinches away from his touch, but she cannot escape his grasp. He pulls her close, his eyes boring into hers. "You will pleasure me now, woman," he growls, "or I will kill you." Emily has no choice but to comply. She tremblingly begins to remove her clothing, her eyes never leaving Dagon's face. When she is nude, he pushes her down onto the floor, spreading her legs wide open. He lowers his head, licking her pussy until she is moaning and gasping for air. Then, he stands up, unzipping his pants. Emily's eyes widen in fear as she sees his huge, throbbing cock. "No, please, my Lord, I'm not ready!" she begs. But Dagon doesn't listen. He grabs her hips, thrusting himself deep inside her. Emily screams in pain as he brutally fucks her, pounding her into submission. He grabs her hair, yanking her head back as he comes, grunting like an animal. Emily is sobbing, her body covered in bruises. "You are mine, woman," Dagon growls, "and you will please me whenever I desire.""

So not the best erotica I've ever seen in my life, but a long way from the worst.
Woah, this is written by an AI? Damn!!
But still, could an AI be trained to write a story after feeding it a thousand images and such? Is that possible? Just so that the storyline matches the images, I mean. :)
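For anyone curious how text like the Dagon passage above gets produced, here is a minimal sketch using the legacy OpenAI completions endpoint (the pre-1.0 `openai` Python package); the engine name, token limit and temperature are illustrative choices, not the settings actually used for that example.

```python
# Minimal sketch of generating story text with the legacy OpenAI completions API.
# Requires the pre-1.0 `openai` package and an API key; engine and parameters are
# illustrative, not the exact settings used for the passage above.
import openai

openai.api_key = "sk-..."  # your API key

prompt = (
    "Write a conversation that happens when the Demon Lord Dagon takes prisoner, "
    "Emily a farmer girl from the Dalton village."
)

response = openai.Completion.create(
    engine="text-davinci-002",  # GPT-3-era completion model (example choice)
    prompt=prompt,
    max_tokens=400,             # roughly a few hundred words of output
    temperature=0.9,            # higher values give more varied, "creative" text
)

print(response.choices[0].text.strip())
```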
 

Lucky4U

Member
Nov 2, 2021
100
111
True, but AI could be trained to do it as well. :) All that hard work by various artists could be used to train the AI, and then the AI will be able to do all of that. :) So it's very much possible. :) Just look at all the emotionally engaging stories that an AI can generate with the right training and you will see how it's very much possible. :)
 

Count Morado

Conversation Conqueror
Respected User
Jan 21, 2022
6,695
12,337
True, but AI could be trained to do it as well. :) All that hard work by various artists could be used to train the AI, and then the AI will be able to do all of that. :) So it's very much possible. :) Just look at all the emotionally engaging stories that an AI can generate with the right training and you will see how it's very much possible. :)
Less than one page of a novel (265 words) isn't a complete story. It could easily take as long to prompt an AI to write a short story of 10 pages (2,500-3,000 words) and then revise and edit it as it would for the writer to simply create it themselves (3-6 hours, if a decent writer wishes to have a decent story). As an example, it took AI researchers 2+ hours to have GPT-3 write a 500-word paper about itself, with 7 sections and up to 3 prompts used for each section, and "the outcome was not, contrary to GPT-3's own assessment, well done. The article lacked depth, references and adequate self analysis."

Just as seen on F95: you can tell when creators have talent and when they don't, in terms of writing or art. Even with the advent of DAZ and other 3D modeling tools, there are obvious differences.
 

Lucky4U

Member
Nov 2, 2021
100
111
Less than one page of a novel (265 words) isn't a complete story. It could easily take as long to prompt an AI to write a short story of 10 pages (2,500-3,000 words) and then revise and edit it as it would for the writer to simply create it themselves (3-6 hours, if a decent writer wishes to have a decent story). As an example, it took AI researchers 2+ hours to have GPT-3 write a 500-word paper about itself, with 7 sections and up to 3 prompts used for each section, and "the outcome was not, contrary to GPT-3's own assessment, well done. The article lacked depth, references and adequate self analysis."

Just as seen on F95: you can tell when creators have talent and when they don't, in terms of writing or art. Even with the advent of DAZ and other 3D modeling tools, there are obvious differences.
True, but as the systems and the technology improve, the time taken will be far less, because a machine is only limited by the firepower of its hardware, yeah? Whereas in the case of humans, their hardware can't be improved, so the time humans take will stay the same while the time machines take keeps falling. So if I were a betting man, I would bet that most artists will be out of their jobs in the next 10 years. :) Only the most creative artists, and those most in touch with emotions, would survive, because literally their only job would be to come up with new ideas for content and to proofread, making sure the images fit properly with the text (or even AI voiceovers in the future). :)
Or at least that's what I think. :)
 

Mizorae

New Member
May 1, 2021
3
3
I can agree with this, since I was using Midjourney to make some concept art for a project that I've been working on, especially for background art. Although it isn't perfect, it does pretty well if you know how to use the prompts.

Honestly, if it could replicate the same exact style each time, it would look very authentic. It's still in its infancy, but it's definitely something that can be pretty helpful, especially for a solo dev who may not have the skills to do 2D background art.

For characters and people, I haven't tried it, but I've seen people make some realistic-looking people and even anime decently well. I don't think it's there yet, though. Also, they don't allow adult content on it (yet), so many 18+ artists won't have to worry too much.

I wouldn't say it'll replace artists, but it definitely will make things harder for some artists in the future. I think an artist can capture the emotion of the art way better than an AI could, but the speed of the AI is great for concept art, definitely. I think it's something with the caveat of being both really helpful and potentially harmful to certain groups of people (coming from someone who is an artist).
 
  • Like
Reactions: Lucky4U

DuniX

Well-Known Member
Dec 20, 2016
1,086
727
It could work for Image Pack style games where quantity is more important than quality.
But the more fundamental problem is having the training data in the first place: they aren't going to make porn.
 

Nightwish2

Newbie
Apr 27, 2017
31
26
No. You can train them to recognize emotions, and potentially train them to fake them. But that's absolutely not the same as understanding emotions.

To generate a CG that matches the scene context, you need more than academic knowledge; what you need is empathy. You need to be able to put yourself in the shoes of the protagonist, to understand what (s)he would feel at this instant, both in regard to the present scene and in regard to his/her personality and life.
The problem that AI has is that to have empathy, you first need to be able to feel emotions, and to feel them for real. You need to have gone through their weirdness, through their contradictions, through their differences in intensity. This can't be taught, it can only be lived.
Why can the presence of a single person radically change the emotion you'll feel in a given context? Why doesn't this person need to be someone you love, and why can it perfectly well be someone you hate? Why will you be more angry if it's a person you hate, yet just as intensely as if it were a person you love? Why, if it's a person you love, can you be just as angry, but at yourself, as you would be at a person you hate?

Even we don't have the answers to all those questions, and in fact to almost all questions regarding emotions. We know why they "technically" happen; we will be happier if there's a dopamine release, for example. We know, to a certain extent, how to control them; do something that leads to a dopamine release, and you'll be happier.
But we don't know why they "practically" happen. Why does seeing this person lead our brain to release dopamine? Because we are in love? But why are we in love?

We don't even know why we feel emotions, and you believe we can teach an AI how to feel them? No. There will always be a moment when the AI has to take a rational decision that then starts the emotional process: "I love this person, so I'll release dopamine, so it will make me happier, so I have to act happier than before".
But this is incompatible with emotions, since they are irrational by nature.

Present me with an exact copy of my wife, and I really mean "exact copy" (physically, chemically, mentally and all), and I won't be happier, but I would probably smile; this while, a few years ago, it would have made me cry. "This entity" made me fall in love 29 years ago, but now it would only make me sad, while still making me smile. Simply because "this entity" died eleven years ago.
Due to a small factor, the same combination of stimuli leads to an opposite reaction. Yet, make a small change to some of them, and I would probably fall in love again.
The chain of events is too complex, and relies on too vast a field of personal experience and evolution, for it to be effectively understood by an AI. You can make it understand why one person feels this or that emotion, but you cannot make it understand what emotion it should itself feel.
That argument is shallow. The AI isn't learning in the human sense of the word, but there is no reason to assume that a human artist will be better than a good AI at showing emotion in art. A good AI will be better than the average artist, and there is nothing inherent to a human that will never be possible for the AI to replicate. This is especially true if we get to the point of GPT-4, where you can ask the AI to correct errors in programming. You can describe what you want and the AI will make the code. It's no different in art. Sure, you can't expect consistent and good results if you can only add a few keywords to the image generation, but that will quickly change. Even if it takes dozens of tries, you will be able to get what you want from the AI. Maybe it doesn't get the emotions you want to present quite right on the first try, but you will be able to just ask for variations.

Not to mention, most visual novels use sprites that are repeated multiple times throughout the VN. How many tries do you think it takes for an artist to get the final product on those sprites? Why would it be any different for AI? I get the romantic nature of the idea that AI can't express feelings when generating art, but that's just bullshit. The AI won't understand the feelings, but it will learn how humans visually express those feelings, especially if you are talking about styles that are not realistic.

The example you gave isn't relevant because commercial art is not supposed to target your specific feelings for a specific person. The artist will make the art based on their goals, and it's up to the consumer to resonate with it or not. Different things will resonate with different people. The person behind the commands to generate the art is in part the artist, because they will pick the generated art that is most fitting for what they want to convey. AI doesn't need to understand it to properly simulate it. A person with no artistic capability can't express things like an artist can, but the AI changes that. As long as there is a human behind it trying to convey a feeling, it's no different from a normal person trying to make art. The quality and the level of resonance will vary depending on either the artistic talent of the artist or the quality of the AI model + effort of the person trying to convey those feelings.
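A minimal sketch of the "dozens of tries, then ask for variations" workflow described above, using the legacy (pre-1.0) OpenAI image endpoints; the prompt, sizes and the local file name are illustrative assumptions, not a specific recommended setup.

```python
# Sketch of the iterate-until-it-fits workflow: generate candidates, pick the closest,
# then request variations of it. Uses the legacy (pre-1.0) `openai` image endpoints;
# prompt, sizes and the local file name are illustrative assumptions.
import openai

openai.api_key = "sk-..."

prompt = "portrait of a young woman hiding sadness behind a forced smile, visual novel style"

# First pass: a handful of candidates from the text prompt.
first = openai.Image.create(prompt=prompt, n=4, size="512x512")
for i, item in enumerate(first["data"]):
    print(f"candidate {i}: {item['url']}")

# Suppose candidate 2 is closest to the intended emotion and was saved locally:
# ask for variations of that image instead of starting over.
with open("candidate_2.png", "rb") as image_file:
    variations = openai.Image.create_variation(image=image_file, n=4, size="512x512")

for i, item in enumerate(variations["data"]):
    print(f"variation {i}: {item['url']}")
```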
 

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Respected User
Donor
Jun 10, 2017
10,131
14,809
This is especially true if we get to the point of GPT-4, where you can ask the AI to correct errors in programming. You can describe what you want and the AI will make the code. It's no different in art.
Ok, I like to think that some of my code is a piece of art, but are you seriously saying that the cold rationality of coding and the warm irrationality of art aren't two opposites?

But in the end, what makes an AI unable to produce art, only images, and this for at least a really long time still, is its inability to make errors. It's inherent to their structure; the whole concept behind deep learning is that the AI progresses because its goal is to never, ever make "this error" again.
But when Lee Miller turned on the light while Man Ray was developing some photographs, and he quickly tried to protect them, solarization was discovered. When Bob Ross missed a brush stroke, a new happy tree came to life. Bob Ross is in fact a relatively good example here: AI replicates what is, but art is the replication of what you feel, and that's precisely what he was teaching.
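For what it's worth, the "never make this error again" idea is literally what training is: parameters get nudged in whatever direction shrinks a measured error. A toy sketch of that loop, with made-up data and a single weight, purely to illustrate the principle:

```python
# Toy illustration of the error-minimisation loop behind deep learning training.
# A single weight is repeatedly nudged in the direction that shrinks the error;
# the procedure never rewards a "happy accident" that makes the error larger.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (input, target) pairs

w = 0.0    # the whole "model": predict y = w * x
lr = 0.05  # learning rate

for step in range(200):
    grad = 0.0
    for x, target in data:
        error = w * x - target       # how wrong the prediction is
        grad += 2 * error * x        # gradient of the squared error w.r.t. w
    w -= lr * grad / len(data)       # nudge w so the error gets smaller

print(f"learned w = {w:.2f}")        # ends up near 2, the value that minimises the error
```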

It's also why you are wrong when you say that an AI just has to know how humans represent an emotion to convey that emotion in an image. Simply because there are, globally speaking, an infinite number of ways to represent an emotion. Putting a smile on a face doesn't mean your image will necessarily carry happiness. Put a genuine smile on a serial killer, and the really small variations in his posture will trigger fear in you; and you'll generally not even know where this fear comes from. Draw someone who cries and, depending on everything else, you'll feel happiness, sadness, pity, shame, or why not a bit of tenderness.


I get the romantic nature of the idea that AI can't express feelings when generating art, but that's just bullshit.
There's nothing romantic about it, it's just pragmatism.

Empathy is, by far, the main reason why we feel something while looking at art. And artists know this; they play with it to make their creations carry the feelings they want.
To be a great artist, you need high cognitive empathy. It's how you control what the viewer will feel and ensure that the majority will feel something that falls in the expected range of emotion. But, while it's something we can develop, empathy isn't something that can be taught. It's inherent to our genetic code; we can ignore it or help it grow, but we cannot acquire it.
And it happens that, so far, AI has the same level of empathy as a psychopath. It knows how to fake emotions, but doesn't know why they are triggered, nor how to effectively trigger them in others. But how would an AI know this? Beyond the chemical reaction itself, even neurologists don't know.
 

DuniX

Well-Known Member
Dec 20, 2016
1,086
727
And it happens that, so far, AI has the same level of empathy as a psychopath. It knows how to fake emotions, but doesn't know why they are triggered, nor how to effectively trigger them in others. But how would an AI know this? Beyond the chemical reaction itself, even neurologists don't know.
You have a fundamental misunderstanding of what AIs can or cannot do.

It's not Emotions that AI cannot do, it is "Intention" that it cannot do.
You have Eyes and use them for Recognition; for something to be Art you only need to look at it and Recognize it as such, that is all it takes.
The Art, the Image is the Data, things like Emotions, Empathy and whatnot are Patterns in the Data, they are Patterns because you can Recognize them.
ALL Patterns in the Data can be recreated and utilized by an AI; to them, the Patterns are extracted into their essence, like Plato's Forms.
They become part of the Prompt just like any other element used for that prompt, like subject, environment, style, etc.

You are correct that AIs do not have any understanding of what they are doing and thus cannot have the Intention of doing things.
You are correct that AIs have no emotions to drive them and thus cannot be Active Agents.
They do not draw because they feel like it, they draw because you tell them to.
But that has nothing to do with expressing emotion and giving the art a feeling as that can be done by the Ghosts of Artists that reside in the Data.

The Ghost in the Machine has already been achieved, it's just in a form that does not align with your fantasies.
 
  • Like
Reactions: Lucky4U

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Respected User
Donor
Jun 10, 2017
10,131
14,809
You have a fundamental misunderstanding of what AIs can or cannot do.
And you clearly haven't understood a single line of what you're answering to.


The Art, the Image is the Data, things like Emotions, Empathy and whatnot are Patterns in the Data, they are Patterns because you can Recognize them.
Wait, what?
Empathy... is a pattern in the data? Empathy, a mental ability, is a pattern in the data? And it's one because you can recognize empathy in a drawing?
Please, explain how you recognize that someone has high empathy just by looking at a drawing, any drawing...

Plus, had you taken the time to read what I said, you would have seen that I, relatively explicitly, said that emotions are not represented, they are triggered. The artist draws what he expects to resonate with the viewer's empathy in such a way that the said viewer will feel a given emotion. Therefore, saying that an AI can recognize the pattern representing an emotion is fine, but it has nothing to do with what I said.
It's a point you have in common with the person I was answering: you have a deep misunderstanding of how art works.


You are correct that AIs do not have any understanding on what they are doing and thus cannot have the Intention of doing things.
I haven't said that in the post you are answering to.


You are correct that AIs have no emotions to drive them and thus cannot be Active Agents.
Once again, in the post you are answering it's their lack of empathy that I talk about; I wasn't addressing their lack of emotions.
The prompt an artist would use isn't something like "show a person who is happy", but "show a person that will make the viewer feel that this person is happy". And to be able to do that, an AI needs to understand what emotions are and to be capable of empathy.


The Ghost in the Machines has already been achieved, it's just in a form that do not align with your fantasies.
You should read the manga again. The Ghost isn't the ability to think or understand, but to feel. And you said yourself that AIs aren't capable of this.
 

DuniX

Well-Known Member
Dec 20, 2016
1,086
727
Plus, had you taken the time to read what I said, you would have seen that I, relatively explicitly, said that emotions are not represented, they are triggered.
It's triggered when you see it, and what you see is what you draw.
What you draw already contains what triggers you.
That is precisely why the pattern is already in the Data.
If the data was just random images then you would be correct, it would be noise without the signal.
But it is called art precisely because it contains the signal and that is what is analyzed by AIs and extracted.
Whatever art we create, it's impossible not to put a bit of ourselves in it.
With the sheer Scale of the Data those AIs work on, all the bits and pieces accumulate into something.

This is precisely why GPT 5 is better than GPT 4 is better than GPT 3. It is not the algorithm or the programming that fundamentally changes between the versions, it's the amount of Data that is Processed that changes.

You should read the manga again. The Ghost isn't the ability to think or understand, but to feel. And you said yourself that AIs aren't capable of this.
What makes you feel is already contained in the Data. The Ghosts are there and are usable. We have already digitized humanity, in a way; it's just not in a sci-fi Terminator kind of way.
You are correct that they have no Agency or Feelings of their own.
You need to Prompt them to give you the result you want.
 
  • Like
Reactions: Lucky4U

anne O'nymous

I'm not grumpy, I'm just coded that way.
Modder
Respected User
Donor
Jun 10, 2017
10,131
14,809
It's triggered when you see it, and what you see is what you draw.
Once again you are answering, but you haven't understood what you've read.

You draw a smile, I see a smile. But what I see is not necessarily what I'll feel.
Take the Joker's smile, for example: what I feel can perfectly well be sympathy, happiness, fear or sadness. It all depends on everything else in the image and the thoughts it deliberately sends me to. If you build your drawing to send me to his youth, I'll probably feel sadness, because that's what hides behind his smile. If you build it to send me to his madness, it will more surely be fear, because he's dangerous and unpredictable. And so on.

It's the intent behind the whole composition that triggers something. How it's drawn, where it's placed, in what order and near to what, matters as much as what is drawn.
If I say "you are idiot like many others", it's not the same as if I say "you are like many others idiots". The words are the same, but in a sentence words carry just a part of the meaning. Their place and order are as important as the words themselves. I moved only one word, and it was enough for the sentence to carry a different message.
And the exact same principle applies to art. Each element of the composition has its importance and can't be placed elsewhere, or drawn another way, without changing the whole meaning. Put one part before another, and the triggered emotion will be radically different, because it changes the emphasis and, therefore, the viewer's thoughts.

While an AI can effectively deal with this issue when it comes to sentences, it can't when it comes to art. This is because languages are codified, bounded and follow strict rules that structure them. But this doesn't apply to art and the triggering of emotions. There are no codes saying that if you put this before that, you'll point the viewer's thoughts in a different direction.
Or, more precisely, there are codes, but they aren't universal and only apply in laboratory conditions. Use warmer colors for one element and you'll put more emphasis on it, wherever it's placed. But this doesn't apply if your whole composition is based on warm colors. In that case, it's by using colder colors that you put the emphasis on the element. And while cold colors are generally used to carry negative emotions, this emphasized element could still trigger a positive emotion. If your drawing is a surge of overly warm colours, the cold place will feel safe in comparison. What you'll see when looking at that cold place is peace in the middle of a tempest, and that is something positive.

Of course, since I can write it, it also means that all this is known. But as shown by my example, it depends on factors that go beyond an AI's field of comprehension. Just because in one case the cold part triggers positive emotions doesn't mean it will always be the case. Warm kittens and a cold house will trigger fear; don't go there, poor kittens, you'll die if you do. This while, as I said, the cold house can be seen as a safe place if you replace the kittens with something else; tigers, for example.
There's a reason why painters step back from time to time. It's to feel their painting, to look at it and see where they need to accentuate the emphasis in order to trigger the right emotion. They do it both by instinct and by empathy, two things that are beyond AIs' comprehension.

Edit: Love you too 1Teddy1 no need to beg for that.
 
Last edited:
  • Angry
Reactions: 1Teddy1

DuniX

Well-Known Member
Dec 20, 2016
1,086
727
It's the intent behind the whole composition that triggers something. How it's drawn, where it's placed, in what order and near to what, matters as much as what is drawn.
You are correct that AI cannot do Intent.
You need the Prompt to give the Intent, otherwise it would just be some random hallucination.
But given the Prompt it can do everything.
It cannot Judge something by itself; it is you, and the back-and-forth of feedback, judgement and tweaking of the Prompt, that is needed to do that.

If I say "you are idiot like many others", it's not the same as if I say "you are like many others idiots". The words are the same, but in a sentence words carry just a part of the meaning. Their place and order are as important as the words themselves. I moved only one word, and it was enough for the sentence to carry a different message.
That's fundamentally why the GPT models are successful: how many instances of "you are idiot like many others" are you going to find in books and sayings? It's precisely a pattern that represents a certain meaning, and those neurons fire up because of that meaning.

But this doesn't apply to art and the triggering of emotions. There are no codes saying that if you put this before that, you'll point the viewer's thoughts in a different direction.
What do you think Recognition is?
There are medical mental illnesses where people have problems recognizing things like faces.
They aren't as esoteric as you think; in fact they are far simpler. Animals can do all that while they cannot read any books, so the precise opposite of what you say is true.

Of course, since I can write it, it also means that all this is known. But as shown by my example, it depends on factors that go beyond an AI's field of comprehension.
It's not beyond an AI's comprehension. AIs seem so magical precisely because we have managed to capture that.
Ask yourself: why is GPT 4 better than GPT 3? What is the difference? You will see that I am correct.
 
  • Like
Reactions: Lucky4U

Lucky4U

Member
Nov 2, 2021
100
111
I agree. :) Hey man, Has anyone made any AI generated games yet? :) I am really interested in this. :)
Or maybe just the AI generated CGI? :) Know any?
 

DuniX

Well-Known Member
Dec 20, 2016
1,086
727
I agree. :) Hey man, Has anyone made any AI generated games yet? :) I am really interested in this. :)
Or maybe just the AI generated CGI? :) Know any?
There is one, as an experiment.
There are already games here with the AI CG tag.
 

nokey69

New Member
May 10, 2017
7
27
I agree. :) Hey man, Has anyone made any AI generated games yet? :) I am really interested in this. :)
Or maybe just the AI generated CGI? :) Know any?
Been experimenting with various AI image generation methods that could be used in gaming; not sure I'll actually end up making a game though.

So far I've managed to do stuff ranging from simple 2D prompt images of 1-3 characters in a scene, to an entire CGI background with characters inserted within the scene to fit the context, to 2D spritesheet sidescroller-type graphics, to directing the AI to generate specific poses, i.e.
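For anyone wanting to try something similar, here is a minimal prompt-to-image sketch with the open-source diffusers library; the checkpoint, prompt and parameters are illustrative, and pose-level control of the kind described above usually layers extra conditioning (e.g. a ControlNet with a pose skeleton) on top of this, which isn't shown.

```python
# Minimal text-to-image sketch with Hugging Face diffusers (checkpoint is an example choice).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU makes generation practical

prompt = "two characters talking in a tavern, warm lighting, 2D visual novel background art"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("tavern_scene.png")
```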

 
  • Red Heart
Reactions: Lucky4U

nokey69

New Member
May 10, 2017
7
27
smh. :( More Japanese games like we don't have enough of them already. :(
I would love it if you made a game with the girl in your profile pic. :) Would be nice for a change, rather than yet another barrage of Japanese content.
That's just a matter of picking the model, i.e.