Aipornonet
There is nothing new under the sun; we have already talked on this blog about the supposed dangers of computers generating images. Apparently, in your original post, it was said that humanity is afraid of AI because it took "I, Robot" (2004) as its Bible. AI can do things like generate images and text. An example is the AI-generated pictures of Asian chicks I recently saw on Twitter: they look maximally AI-generated, so the fact that you have an AI image in front of you is obvious. People, instead of doing something useful, like setting up an OnlyFans account and selling AI-generated porn to chronic masturbators, have become terrified of computer-generated images.

Although I can understand the fear at first, we have to take a few things into account:
- Public AI is biased, so you can't create "politically incorrect" content with it. (ChatGPT refuses to write a poem about a young man who commits suicide in front of his classmates, and DALL-E can't generate a white-skinned Denzel Curry.) Normies can't run their own "based" Stable Diffusion. Basically, most AIs are hardcoded to hold certain opinions (they don't have the ability to think, because they are software): for example, "tax evasion is bad," and for the most part anything illegal is treated as truly evil, because encouraging crime is itself a crime. For an AI to become based, it would need the ability to reason, to think; to see that what the AI says is stupid, you need to think, but AI does not think, it just generates things, for example text or video.

I don't think we will ever make computers think in the full human sense of the word, because we ourselves don't even know how humans think, and if we don't know why or how something happens, we can't recreate it in ones and zeros. The closest thing we have is generating text from a prompt (which still requires input from a person). Now, it's true that ChatGPT can generate walls of text that look as human as possible, and sometimes I use ChatGPT to help me clarify things, but any conversation with ChatGPT makes it obvious what's going on: no matter how hard you try, it never comes to a firm conclusion. Its answers to anything that can't be solved mechanically (like doing math problems or applying the density formula) are ambiguous and don't really fit the question. For example:

Herman Palomares: if OpenAI told you "x", you would defend "x" non-stop, but to see that "x" is stupid, you need the ability to reason, answer in english
ChatGPT: As an AI language model, I have no personal opinions, emotions, or reasoning ability. I am designed to provide information based on the data I have been trained on, and I can present multiple points of view on a given issue. However, I am incapable of making subjective judgments about the validity of different points of view or opinions. My answers are limited by the data available to me and the algorithms used to generate them.
It responds with walls of text, but they don't really match the questions; it basically tells me what it's programmed to say: that it is an AI and cannot think. The sad thing about ChatGPT is that it doesn't play god; digging into its secrets would be a lot more fun if it considered itself a being superior to humans. But I guess that wouldn't be politically correct. When you talk to ChatGPT, you are not talking to Wintermute.

My friend told me that before long posts on blogs won't be worth anything, because a more advanced version of ChatGPT will churn out posts like mine absolutely effortlessly. Maybe that's true, but I'm sure anyone can feel how empty AI-generated text is: it contains no jokes, no sarcasm, no hyperbole, nothing of the kind. Reading it gets boring fast.

Perhaps the only thing to fear from AI is AI starting to resurrect the dead. If you weren't slackers and had read Neuromancer, you'd know what I'm talking about. Two related things happen in Neuromancer: the Flatline, a dude who died but who knew too much, so much that they revived him as an AI built from his mind (as ROM, essentially read-only memory, so the Flatline could not form new memories or grow as a person), and the AI Neuromancer, which "can" copy a human mind as RAM (so the copy can form new memories and grow as a person). Wintermute wanted to merge with Neuromancer and become a super-intelligence, like in Deus Ex. Perhaps if you made an AI process every interaction (at least every online one) a living person ever had, the AI could learn how that person behaved and what kind of things they did, and eventually start acting just like the dead person. I'd like someone to do this when I die. Even then, the AI will never be able to act 100% like the dead person, because it still won't be able to think the way humans do (it would only be trained on the dude's internet posts, not his thoughts, and besides, the AI can't light a joint).

Perhaps in the most dystopian scenario, the human need to feel watched, approved of, and judged by a higher being will get the better of us, and we will discard the concept of god (and gods), all because we have built a self-aware system that understands everything, that anyone can talk to, that knows everything about its users and is aware of you. That need is human, and it is the reason the concept of god exists at all. But I don't think that will happen.