I hate everything about this: the lack of transparency, the lack of communication, the chaotic back and forth. We don’t know if the company is now in a better position or worse.
It leaves me feeling pretty sick and distrustful, considering the importance and potentially extreme disruptiveness of AI in the coming years.
Given the rumors he was fired based on undisclosed usage of some foreign data scraping company’s data, it ain’t looking good.
Now that there’s big money involved, screw ethics. We don’t care how the training data was acquired.
I wouldn’t care about the ethics here if the money were excluded as well.
If they lived up to the goals they set for themselves, it would be fine.
But it’s similar to Google back in the day, with “don’t be evil”.
Can I find out more about these rumors somewhere?
I’ve tried to find it but I can’t seem to. There was a thread on Lemmy somewhere about it that linked to a thread on Blind where someone claiming to work at OpenAI said they’d heard that from the board.
Ultimately it’s just rumors; we don’t know for sure. But it was at least pretty plausible, and it’s what I would expect the board of a very successful AI company to fire the CEO for, since the company is obviously doing really well right now.
That’s not how rumors work.
What? Rumors work by people discussing them.
I didn’t ask for proof.
Same here. I like Sam Altman but if the board removed him for a good reason and he was reinstated because the employees want payouts, humanity could be in big trouble.
I actually like the chaos, because I don’t like having one small group of people as the self-appointed and de facto gatekeepers of AI for everyone else. This makes it clear to everyone why it’s important to control your own AI resources.
Accelerationism is human sacrifice. It only works if it does damage… and most of the time, it only does damage.
Not wanting a small group of self-appointed gatekeepers is not the same as accelerationism.
… the goal is not what makes it acceleration.
“Accelerationism” is a philosophical position. The goal is entirely what makes it accelerationism. Quit swapping words in each new comment.
For fuck’s sake. You want bad things to happen… so good things happen, later. Bad shit happening is the part that’s objectionable. Saying ‘but I want good things’ isn’t fucking relevant to why someone’s hassling you about this!
The bad shit you want to happen first is the only part that’s real!
You want bad things to happen
No, that’s entirely you assuming things about my position. I don’t want bad things to happen.
I guess the entire workforce calling the board incompetent twats and threatening to quit was actually effective.
Sounds like they got together and forced their hand. Wonder if there’s a term for that?
Maybe some type of group or team. Or union. Nah that will never stick
I guess this will have to do as entertainment until GRRM finishes his damn book.
Any day now! I have a friend who got hyped up every time George published another chapter from WoW, but I just refuse to read any of them. I want a complete book. I’m not sure he’s got any idea of how to finish his own story.
I didn’t know he wrote for World of Warcraft
I know you’re joking, but it stands for The Winds of Winter, if anyone is confused.
He’ll never finish it.
Yeah that’s my feeling as well
It’s okay, though, we’ll have an AI that can do it soon enough.
On the one hand, the board was an insane cult of effective altruism / longtermism / LessWrong, so fuck them. But on the other hand, this was a worker revolt for the capitalists, which I guess shouldn’t be surprising since tech workers famously lack class consciousness.
People are asking what is wrong with these cults. It’s a lot to cover so I won’t try. People who follow the podcasts Tech Won’t Save Us or This Machine Kills will already be familiar with them. Here’s an article relevant to the moment that talks about them a little: Pivot to AI: Replacing Sam Altman with a very small shell script
Genuinely confused by your first statement (in particular effective altruism). What does that have to do with the board?
Not an attack, just actually clueless.
Several of the [former] board members are affiliated with the movement. EA is concerned with existential risk, AI being perceived as a big one. OpenAI’s nonprofit was founded with the intent to perform AI research safely, and those members of the board still reflected that interest.
famously lack class consciousness
How much money do you suppose the average OpenAI employee makes? What class do you imagine they’re part of?
I’m sure the developers make the lower half of six figures, but they still have to sell their labor to survive, so they’re still working class.
I’ve been an SF Bay Area software developer for almost thirty years, so I know them well. I consider us members of the professional–managerial class (PMC). We generally think we’re “above” the working class (we’re not), and so we seldom have any sense of solidarity with the rest of the working class (or even each other), and we think unionization is for those other people and not us.
When Hillary Clinton talked about the “basket of deplorables,” she was talking to her PMC donors and voters about the rest of the working class, and we ate that shit up. Most of my peers have still learned no lessons from her election defeat, preferring to blame debunked RussiaGate conspiracy theories.
That’s what happens when the wealth is shared with those who make it. Everyone becomes a capitalist.
Nah. It’s more like the pusher man. Give them their first taste for free, and they’ll be a customer for life.
Man, what a clusterfuck. Things still don’t really add up based on public info. I’m sure this will be the end of any real attempts at safeguards, but with the board acting the way it did, I don’t know that there would’ve been any even without him returning. You know the board fucked up hard when some SV tech bro looks like the good guy.
I mean, the non-profit board appears, at current glance, to have fired the CEO for their paranoid-delusional belief that this LLM is somehow a real AGI, and that we are already at the point of a thinking, learning AI.
Either that’s delusions of grandeur on the board’s part, or they didn’t (and don’t) understand what is really going on, which might be why they fired the CEO: for not truly informing the board what level OpenAI’s AI is actually at. So the board was trying to rein in a beast that is merely a puppy, armed with information that was wrong.
Where are you getting this information?
Since I used the word “appears”, I am postulating based on how the company is controlled (the non-profit entity), as well as on certain statements board members have made in the past, such as Ilya Sutskever (now ex-board??), whose thinking has likely been influenced by his mentor Geoffrey Hinton, who was quoted on 60 Minutes saying AI is about to be “more intelligent than us”. Beyond his scientific work in AI and his position as Chief Scientist of OpenAI, Ilya is known for some odd behavior around his commitment to AI safety, though I’m sure his beliefs come from the right place.
There’s a lot more to this, for each board member and for Sam, but it makes me believe that a large wall was erected around information, leading to a paranoid board.
Really? I thought it was because he supposedly raped his younger sister.
Could be, but words on Twitter and no lawsuit don’t really get you ejected from your CEO position. Imagine if CEOs got ejected for stuff like that; there’d be no CEOs left.
Excuse me what
https://mastodon.social/@MattHodges/111456744670095882
me, at the thanksgiving table tomorrow
What about his deal with Microsoft?
What about it?
deleted by creator
And the lord is back in his fiefdom
Because 95% of the people that worked for him demanded it.
Then he’s a popular lord
So what’s the problem?
That you don’t see the problem
Explain, then. “It should be obvious” is not an explanation.
The fact that the employees were able to exercise their de facto power in a crisis is good, but the fact that they don’t have explicit power in the decision-making process is why this was able to happen in the first place.
There are no good kings, even if the best men were made kings, they would be inherently tainted by the position.
The fact that the employees were able to exercise their de facto power in a crisis is good
That’s all that I’m saying.
If you’ve got issues with the whole concept of hierarchical power structures or there being such a thing as “leaders”, that’s a bit beyond the scope of this particular situation.
Heck, you could even keep the hierarchy, but with no representation of the workers in leadership you lose a major perspective on the organization.
Fucking Kendall Roy on the OpenAI board or something
So where are all the folks coming out of the woodwork to tell us this isn’t Technology news, then? They sure want to shit all over the comments whenever Musk is the subject, but here, in this nearly identical situation? Crickets, naturally. For five days I’ve heard no piece of news from this instance other than the personal schedule of Sam Altman. It was good to hear about what happened once. Now we’re on post 63 of the same news.
Don’t get me wrong, I dislike Elongated Muskrat as much as the next guy. But there’s an extremely vocal minority here that love to invade the comments on every post of anything he’s done to cry about how that isn’t technology news. I generally like to argue that yes, it is technology news that Twitter has refactored how their verification mark works, or that advertisers are pulling out due to offensively alt-right content being promoted by Muskrat. I also think this situation with Altman is legitimate technology news, I just like to point out hypocrisy when I see it.
Why is Elon Musk in this comment? He’s not Technology news. Get this content out of here!
Is that what you want? But seriously the only time I see complaining is when it’s not actual tech news, just some random ass tweet he put out.
A wild Elon Musk rant has appeared, complaining about how people are complaining about how Elon Musk is irrelevant, in a thread that has nothing to do with Elon Musk.
I really have no idea how to take this.
people should write in their diaries more often
I see your point but this is completely different. Altman is not on the front page of every news site every day like Elon is, so I’m not sick of looking at his face like I am with Elon.
Also, being fired as CEO of one of the fastest growing and (according to many) most important companies in the world, and then being hired back 3 days later, is a pretty big deal and is worthy of my attention. If there are a handful of articles about it, I’m okay with that, at least for now.
News articles about Elon’s constant political clown shows aren’t technology-related just because he’s in charge of a few tech companies.
News articles about a CEO being fired from a tech company and then almost immediately rehired are tech-related, because they’re about the tech company itself and the relevant actions of the people involved.
If this were a story about the opinions of Sam Altman, who happens to be a CEO of a tech company, about world hunger or something, that would be comparable. But it’s an article about how a CEO, who happens to be Sam Altman, was fired and rehired from a tech company over the course of 3 days.
There are still obviously personalities and opinions involved, but they’re in the context of technology, rather than technology being tangentially related to the context of someone’s opinions.
I maintain that this had something to do with a disagreement over which commercial applications are permissible for GPT-4, and that Sam Altman somewhere along the line negotiated a deal that allowed some actor to participate in one of the “forbidden applications” by proxy via a seemingly unrelated agreement. I’m talking Financial Forecasting (High Frequency Trading), Military, and Policing/Surveillance. Now that Sam’s back and unfettered, I’m guessing we are going to see some of those applications come out into the light.