

Because the most upvoted one thinks NATO is a good thing, but since one unreliable country cannot be kicked out, it should be replaced with another alliance with slight changes. This comment just says NATO BAD.


As someone from the Czech Republic, I am not surprised. There are sometimes huge differences between country names in Czech and English. And the closer the country is, the bigger the difference.
For the German speaking countries eng - ger - cze:
Other examples (eng - cze):


I have to disagree. The only reason computers expanded your mind is that you were curious about them. And that is still true even with AI. For example, people no longer have to learn how to solve derivatives or complex equations; Wolfram Alpha can do that for them. Also, learning grammar isn't that important with spell-checkers. Or instead of learning foreign languages, you can just use automatic translators. Just like computers or the internet, AI makes things easier for people who don't want to learn. But it also makes learning easier. Instead of going through blog posts, you have the information summarized in one place (although maybe incorrectly). And you can even ask the AI questions to better understand or debate the topic, instantly and without being ridiculed by other people for stupid questions.
And just to annoy some people: I am a programmer, but I like the theory much more than coding. So, for example, I refuse to memorize the whole numpy library. But with AI, I don't have to; it just recommends the right obscure function that does the same as my own ugly code. Of course, I check the code and understand every line so I can do it myself next time.
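A hypothetical example of the kind of thing I mean (my own hand-rolled loop next to the numpy one-liner an assistant would point me to; the running-average task here is just an illustration, not anything from a real project):

```python
import numpy as np

data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
window = 3

# my own "ugly" version: running average with an explicit loop
manual = []
for i in range(len(data) - window + 1):
    manual.append(sum(data[i:i + window]) / window)

# the obscure built-in that does the same thing in one line
vectorized = np.convolve(data, np.ones(window) / window, mode="valid")

# both produce identical sliding-window means
assert np.allclose(manual, vectorized)
```

Checking that the two agree, like the assert above does, is exactly the "understand every line" step.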


I usually use randomly generated passwords of the maximal supported length (or 100 characters at most). I would definitely forget even one of these. A password manager also has a great auto-fill feature for fast and simple login in the browser and in PC or phone apps, and it automatically copies 2FA codes.
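A minimal sketch of generating such a password with Python's standard library (the 100-character cap is just my own habit, not something a manager requires):

```python
import secrets
import string

def random_password(max_supported: int = 100) -> str:
    """Generate a random password of the maximal length a site supports."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets (not random) is the cryptographically secure choice for this
    return "".join(secrets.choice(alphabet) for _ in range(max_supported))

print(random_password(32))  # a 32-character random string
```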


Okay, it is easy to see -> a lot of people point it out


I guess because it is easy to see that a living painting and conscious LLMs are incomparable. One is physically impossible; the other is more philosophical and speculative, maybe even undecidable.


I would say that artificial neural nets try to mimic real neurons; they were inspired by them. But there are a lot of differences between them. I studied artificial intelligence, so my experience is mainly with artificial neurons. But from my limited knowledge, real neural nets have no fixed structure (like layers), have binary inputs and outputs (when the activity on the inputs is large enough, the neuron emits a signal), and every day a bunch of neurons die, which leads to a restructuring of the network. Also, from what I remember, short-term memory is “saved” as cycling neural activity, and during sleep the information is stored in neuronal proteins and becomes long-term memory.

However, modern artificial networks (modern meaning the last 40 years) are usually organized into layers whose structure is fixed, with real numbers as inputs and outputs. It's true that context is needed for modern LLMs that use a decoder-only architecture (which most of them do). But the context can be viewed as a memory itself during generation, since for each new token new neurons are added to the net. There are also techniques like Low-Rank Adaptation (LoRA) that are used for quick and effective fine-tuning of neural networks. I think these techniques are used to train specialized agents or to specialize a chatbot for a user. I even used this technique to train my own LLM from an existing one that I wouldn't be able to train otherwise due to GPU memory constraints.
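The core LoRA idea can be sketched in a few lines of numpy (this is just the math on a single weight matrix; real implementations, e.g. the `peft` library, apply it inside attention layers, and the dimensions and `alpha` below are made-up illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

# frozen pretrained weight: never touched during fine-tuning
W = rng.standard_normal((d_out, d_in))

# the only trainable parameters: two small low-rank factors
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero init: fine-tuning starts at the base model

def forward(x, alpha=8.0):
    # base output plus a scaled low-rank correction B @ A @ x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# with B still zero, the adapted model equals the frozen base model
assert np.allclose(forward(x), W @ x)

# memory saving: only A and B need gradients, not W
print(A.size + B.size, "trainable vs", W.size, "frozen")
```

This is why it fits in limited GPU memory: you store gradients and optimizer state only for `A` and `B` (512 numbers here) instead of the full matrix (4096).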
TL;DR: I think the difference between real and artificial neural nets is too huge for “memory” to mean the same thing in both.


As I said in another comment, doesn't the ChatGPT app allow a live conversation with the user? I do not use it, but I saw that it can continuously listen to the user and react live, even use a camera. There is a problem with the growing context, since it is limited. But I saw in several places that the context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle, unless you want unlimited history with all details preserved.


I saw several papers about LLM safety (for example, “Alignment faking in large language models”) that show some “hidden” self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is just trained in and means nothing, or whether it emerged from the model's complexity.
Also, I do not use the ChatGPT app, but doesn't it have a live chat feature where it continuously listens to the user and reacts to them? It can even take pictures. So continuity isn't a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn't be that complicated.


I meant alive in the context of the post. Everyone knows what a painting becoming alive means.


Okay, so from my understanding of what you've said, an LLM could be considered conscious, since studies have pointed to its resilience to changes and its attempts to preserve itself?


Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.


Gray hair has a better signal to heaven


Isn’t McDonald’s in general a luxury you shouldn’t have?


I wouldn't say it's that bad. NATO is only defensive, so other members have no obligation to join US wars. I admit NATO conditions can be used to pressure members, but since everyone hates the attacks on Iran or Venezuela, that influence isn't that big. And sometimes the members even fight each other in proxy wars, for example the US vs Turkey in Syria.