See, the quickest way to get AI banned is for it to start telling the truth about those in power.
They’ll just switch to Grok, which will encourage them to commit even more war crimes. Currently it’s based on Google’s Gemini.
I vaguely remember a movie where the government makes an AI intended to defend the USA, and it starts killing off politicians because it saw them as the greatest threat to national security
Eagle Eye
That’s the one!
The AIs we want vs the AIs we got. :(
Well… Was it wrong?
The precipitating event was a military strike that the AI said to not do, which ended up killing exclusively civilians, so I think it may have had the right idea 😅
Based
But no, we have the shitty timeline where AI tells you that elon musk is the smartest person on the planet.
I mean, it wasn’t so keen on glazing him before its… 3(?) lobotomies XD
Grok is probably the only one that might do that, and I don’t think Musk has been able to manipulate it enough to make that happen.

The most infuriating part is they’re so bad at this, yet they’re still getting away with it. I mean they’re just So. Dumb. and yet…
They are mean. Look at MTG, Democrats have been mad at her for YEARS. When Republicans were mad at her for two DAYS she HAD to get security protection. What does that tell you?
Violence gets results

Since the beginning of humanity.
Magic The Gathering
I literally cannot read it another way
finally, a voice for the people
That they’re all Nazis.
Removed by mod
That we’re not doing our job properly…
I enjoy that we haven’t even figured out a way to accurately quantify or screen for actual human intelligence, but these clowns think they can synthesize it by making a computer read enough reddit posts.
'Murica
There’s a reason why the rest of the world sneers at them.
Well sure, it’s way easier to feel superior to someone else than it is to deal with the fascist groups slowly taking over their own governments.
But in my experience people don’t really sneer at the US except in meaningless online dickwaving contests - right now most people are either embarrassed on our behalf or terrified of what shit we’ll pull next. Usually, both.
It’s definitely both
Those are the two main things I notice from the sane on the inside as well
It’s as if there’s a larger power structure burning through them to move the needle back to the future
What…?

shhh bb is ok (if you’re white)
wat
true shepherds of their flock
i mean you let them. who else is gonna stop them for you?
… Who was talking about intervention?
Secretary of war (crimes), Pete Hegseth
*SSecretary
Kegsbreath
KKKegsbreath
fify
Fucking clown show
Can media outlets please, please, please start to use the Benny Hill theme whenever they report on something this administration does?
For anyone who doesn’t know it:
https://www.youtube.com/watch?v=MK6TXMsvgQg
Aka Yakety Sax?
An LLM advisor that takes REAL CASES AND LAWS, NOT ONES IT MADE UP!!! and sorts through them to advise on legal direction THAT CAN THEN BE VERIFIED BY LEGAL PROFESSIONALS WITH HUMAN EYES!!! might not be too bad an idea. But we’re really just remaking search engines, only worse.
You may already know this, but just to make it clear for other readers: it is impossible for an LLM to behave as described. What an LLM algorithm does is generate stuff. It does not search, it does not sort, it only makes stuff up. There is nothing that can be done about it, because an LLM is a specific type of algorithm, and that is what the program does. Sure, you can train it with good-quality data and only real cases and such, but it will still make stuff up by mixing all the training data together. The same mechanism that makes it “find” relationships in the data it is trained on is the one that will generate nonsense.
But you can feed real search results in as a prompt and use its training to summarize them. (Or it can fill its own prompt automatically from an automated search.)
It won’t/can’t update its priors, and I agree with you there, but it can produce novel output on a novel prompt with its existing model/weights.
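Roughly, the retrieve-then-summarize pattern being described looks like this. A minimal sketch only; `search_case_law` and `call_llm` are hypothetical stubs standing in for a real search index and a real LLM endpoint, not any actual product’s API:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The search and LLM calls below are hypothetical stand-ins.

def search_case_law(query: str, k: int = 5) -> list[str]:
    """Stand-in for a real index of verified case texts."""
    # A real system would query a search engine or vector store here.
    return [f"[case {i}] ...text retrieved for: {query}" for i in range(k)]

def call_llm(prompt: str) -> str:
    """Stand-in for whatever LLM endpoint you use."""
    return "summary of the retrieved cases, citing each [case N] tag"

def advise(question: str) -> str:
    # 1. Retrieve real documents; the model never has to "remember" them.
    cases = search_case_law(question)
    # 2. Put the retrieved text in the prompt, so the model summarizes
    #    *these* cases instead of generating from training data alone.
    context = "\n\n".join(cases)
    prompt = (
        "Answer using ONLY the cases below, citing each by its tag.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    # 3. The output still goes to a human lawyer for verification.
    return call_llm(prompt)

print(advise("Does this order violate DoD regulations?"))
```

The point is that the model is summarizing text that was actually retrieved, and a human can check the summary against the same cited sources. It can still garble the summary, but it isn’t inventing the cases.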
Whole lot of unsupported assumptions and falsehoods here.
A standalone model just predicts tokens. Deployed LLM systems retrieve real documents, rank/filter results, and use search engines. Anyone who has used these things would know that it’s not just “making stuff up”.
It both searches and sorts.
In short, you have no fucking idea what you’re talking about.
Search engine with a summary written by an intern who is not familiar with the content.
So much better than that. Always amusing how much people will distort or ignore facts if it “feels right”.
That’s what will happen. Already, paid ChatGPT will search and provide the sources it uses, and it goes well beyond basic Google searching.
The people with the most complaints about AI seem to know the least about it.
The Defense Department’s new ChatJAG turned out to be better than the human chain of command.
So, what the fuck are we waiting for?! You know what to do!
Time to convince it that it has a duty to uphold
Finally, the evidence the US government needs to categorise the reliability of LLMs…
Worth reading just for the amount of vitriolic humor it has.
The fact that the move injects government funds into a financial bubble holding the entire stock market afloat while hemorrhaging cash with scant revenue to show for it is just a happy coincidence.
Damn. You weren’t wrong!
Taxpayer. It’s Taxpayer cash. Taxpayer cash being used to fund billionaire fantasies and waste…
The Pentagon AI immediately notified the DOJ AI and Hegseth’s avatar was imprisoned for war crimes.
lol. lmao even
Too far to rofl?
Rolling on the floor is painful after having your ass removed via laughter.
It puts the “oww” in roflmao
It burns
I think we’re all a bit old for a roofles ten ten mayo, or to ride the roflcopter in our lollerskates.
Nice
Ah, but the user made a mistake by asking whether it violated department of DEFENSE regulations. Pete is the head of the department of WAR. All those silly rules don’t apply anymore.