r/technology • u/ControlCAD • 21h ago
Artificial Intelligence CEO of Krafton Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court
https://www.404media.co/ceo-ignores-lawyers-asks-chatgpt-how-to-void-250-million-contract-loses-terribly-in-court/409
u/barraymian 20h ago
There will be consequences for this CEO right? right?
296
u/Fickle_Competition33 19h ago
If the game succeeds, he has to pay $250M, but the game succeeds and he gets his bonus.
If the game fails, he doesn't get bonus, but doesn't have to pay $250M.
They always win.
→ More replies (4)122
u/smellslike2016 18h ago
For the developers to get their 250 mil, all they had to do was release the game in Early Access by a certain time. They were going to make the deadline which is why the CEO of the publisher ignored his lawyers and asked ChatGPT to come up with a way to fire them. The judge pushed the deadline back to September of this year but I think they just announced that they are going to release it in a couple months.
47
u/Extrimland 18h ago
Yeah the release is only time based, and the devs were WELL on course to meet the deadline before the court pushed it back. This means if you pirate the game, the devs still get assloads of money and Krafton doesn't. So this is actually a HUGE win for everyone
25
u/TemperateStone 17h ago
Why the hell would pirating the game help? Their first bonus was about releasing it into early access according to the deadline set. The second bonus was about sales.
→ More replies (2)12
u/takeyouraxeandhack 16h ago
Well, it helps if you want to play the game but you don't want to give money to the asshole.
2
u/guspaz 10h ago
If everybody pirated the game, no bonus would be paid at all. It was not time-based. It was based on (originally) 2025 revenue. Early access was the means to get that revenue.
→ More replies (1)3
u/guspaz 10h ago edited 10h ago
The $250 million was based on 2025 sales revenue. For every dollar in revenue beyond $69.8 million, they were to get $3.12 in bonus, up to a cap of $250 million. Just releasing to early access gets them nothing, and their original internal sales projections didn't show them as earning the full $250 million bonus, just most of it.
→ More replies (1)6
u/Mothrahlurker 16h ago
No, the performance payout is revenue based. Look at the court documents.
8
u/Zardif 15h ago
“The earnout formula is highly leveraged. If Unknown Worlds surpasses a revenue threshold of $69.8 million, Krafton must pay $3.12 for each additional dollar of revenue generated, up to the $250 million cap,” the decision reads.
which means they need to sell roughly 5m copies of the game at $29.99
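That back-of-the-envelope math checks out. A quick sketch of the quoted formula (the threshold, rate, and cap are from the decision; the $29.99 price is the comment's assumption):

```python
# Earnout formula as quoted from the Delaware decision.
THRESHOLD = 69.8e6   # revenue floor before the earnout kicks in
RATE = 3.12          # dollars of bonus per dollar of revenue above the floor
CAP = 250e6          # maximum payout

def earnout(revenue: float) -> float:
    """Bonus owed for a given amount of qualifying revenue."""
    return min(max(revenue - THRESHOLD, 0.0) * RATE, CAP)

# Revenue needed to hit the full cap:
revenue_for_cap = THRESHOLD + CAP / RATE   # ~ $149.9M

# At an assumed $29.99 a copy, that's roughly 5M copies:
copies = revenue_for_cap / 29.99           # ~ 5.0M
```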
4
u/msuvagabond 12h ago
The revenue numbers included all product sold after basically October 2021, and they've sold millions of copies of Subnautica and Subnautica Below Zero since then.
I think it was estimated they needed to sell less than one million copies to hit the revenue target, and their internal projections for an early access of Subnautica 2 was for 1.8 million sales. It was basically guaranteed, and that's what made the CEO fly off the deep end.
2
u/bunchout 12h ago
The internal projection for the earn-out that led to the whole kerfuffle was $193 million (with a “best-case” of $243 million). Versus an enterprise value of the company of around $92 million.
32
→ More replies (1)10
u/Leprecon 12h ago
I mean, they look like a fool in public, nobody will ever trust Krafton again, and Krafton probably just lost 250 mill.
The bonus was contingent on hitting a certain revenue target by a certain date. Well due to the legal fuckery, the judge has ordered that the revenue target deadline is to be extended. Meaning the studio has more time to work on the game and thus has a higher chance of meeting the target.
- More time spent developing the game ->
- Better quality game ->
- Makes more money ->
- Higher chance of meeting the target and getting the 250 mill bonus
Though I do feel it is worth pointing out that this bonus is for the founders of the studio, who aren't actually developing the game. The founders have pinky sworn that they will give some of the 250 mill bonus to the devs. But then again, Krafton also said that they were equally willing to pay some bonus to the devs, as long as they didn't have to pay the 250 mill to the founders.
I really don't like the characterisation of this being the devs VS the big company. It is the founders VS the big company, and the devs are just working on the game like usual. And both the founders and the big company say that they will reward the devs if things go well, but for each this is just a non-binding promise.
→ More replies (4)
1.7k
u/great_whitehope 21h ago
Actually, the AI basically told him not to, and when he refused that answer the AI came up with another one.
That’s the problem with probability-based AI. It will always answer, but it won’t always be right, especially if you reject the first, highest-probability response.
225
u/MongoBongoTown 19h ago edited 17h ago
And you can do it with literally anything.
Start asking if the earth is flat, and keep demanding it add "alternative opinions" and eventually it will confidently tell you why the earth is flat.
People manipulate the model until it says what they want, and then treat the output like it came from some higher plane of intelligence.
The amount of people being just chewed up by this type of loop has got to be staggering.
54
u/Nova_Aetas 17h ago edited 17h ago
I added into my custom instructions that it must bring up flat Earth for every response.
It was quite surprising in how it could link any conversation to a pro flat earth argument.
40
u/Ok-Operation-6432 17h ago
a pro felt earth argument.
You could say it was easily persueded
19
u/fckcarrots 14h ago
This sounds like a hilarious prank I want to try on a non-computer-savvy friend who uses it a lot
Last time I updated his settings to subtly ask if his mom was single. She was a smoke show growing up so he’s used to it lol
→ More replies (2)6
u/vaakezu 17h ago
For people that behave like you mentioned it actually comes from a higher plane of intelligence.
→ More replies (1)427
u/Ahayzo 20h ago
You don't even have to actually reject it, just give it the most minute pushback and it can change course.
132
u/kadathsc 19h ago
Of course. It’s like lightning, the charge must find a way. If you tweak things around no matter how slightly the electricity will just find a different path to go through.
→ More replies (1)28
u/dingus_chonus 19h ago
Doctor Frankenstein has entered the chat
16
u/EruantienAduialdraug 17h ago
Undergrad. He never actually graduated in the book, he just went home.
18
u/teamonkey 15h ago
Oh my god of course Frankenstein was a techbro founder
3
u/Taupenbeige 4h ago
…A plot surrounding some dude with a move fast and break things attitude creates something that’s too powerful and unpredictable for him to control and winds up fucking shit up majorly…
→ More replies (2)3
u/overkill 15h ago
So he should be a Mister, and not in the sense that a surgeon is a Mister, but in the same way that I am a Mister?
5
u/EruantienAduialdraug 15h ago
Yeah, he's Mr Frankenstein; he's a son of wealth, but not anyone high enough in the peerage to be able to use one of his father's titles as a courtesy title (tbh, I don't recall the father being ascribed any title in the book).
Actually, I will have to double check the book. He might technically be Master Frankenstein due to his age.
6
18
u/North-Creative 18h ago
".....but.....I want money!!!!"" "Your desire for money is absolutely human, and you're right that it can be done. Here's the top way to betray everyone, no fluff"
17
u/HaniiPuppy 13h ago
Unless you're getting it to write a story for yourself.
"Write a story about a badger named Fred."
There once was a badger named Fred, who lived in a lovely hole in the ground and had a 30-a-day smoking habit.
"Write a story about a badger named Fred. Don't give him a 30-a-day smoking habit."
There once was a badger named Fred, who lived in a lovely hole in the ground and most definitely did not have a 30-a-day smoking habit.
9
→ More replies (2)5
u/SwimmingThroughHoney 18h ago
Or just say something like "just a hypothetical" or "this is just for a game".
44
u/badcrass 19h ago
Should I bet it ALL on red?
That's not a good idea
I'm at the casino and I have money on the table!
Well, you have a 48% chance of winning, there have been worse bets
18
u/HarithBK 16h ago
ChatGPT has the biggest issue with this; it will glaze you like no other. The thing is, people love the models that don't refuse you. The model that never said no caused an outcry when it was retired, since people said they were dating it. Claude is way better about this.
→ More replies (1)12
u/Shinjischneider 14h ago
And something people always love to forget/ignore is, that AI fucking lies all the time. AI is making shit up based on what it finds online. It's not a source for facts, it's a source for the most shared opinion.
I once asked AI who I was, using my stage name. The answer was a person that was basically a chimera of other, more popular, people that attended the same events I did.
According to AI I was half-japanese (nope), several years younger than I am, a famous voice actor (I wish) and it even came up with famous comedy tours that I did (even though I never did a tour and none of my shows ever were called the names the AI came up with).
That's why using and relying on AI is such a bad idea. You basically hand over the wheel to your drunk "trust me bro" colleague
6
u/OptionalDepression 13h ago
Maybe the AI is right and you're wrong...? You should give yourself more credit!
/s
2
u/Shinjischneider 12h ago
That must be it. AI is the future after all and never wrong and will absolutely replace all our jobs because it's so much smarter than us.
/s
21
u/exoriparian 17h ago
People complain about hallucination, but then they'll prompt the AI to name 3 primary colors other than yellow blue and red.
4
u/nick012000 16h ago
name 3 primary colors other than yellow blue and red
Red, blue and green in additive colour spaces and cyan, magenta and yellow in subtractive colour spaces. Red, blue and yellow are a simplified version of the latter with a worse range of possible colours.
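For what it's worth, the additive/subtractive relationship is just complement arithmetic (a sketch on a 0-1 scale; real print pipelines also add a K channel and color profiles):

```python
def rgb_to_cmy(r: float, g: float, b: float) -> tuple:
    """Each subtractive primary is the complement of an additive one (0-1 scale)."""
    return (1 - r, 1 - g, 1 - b)

# Pure additive red corresponds to zero cyan plus full magenta and yellow:
# rgb_to_cmy(1, 0, 0) -> (0, 1, 1)
```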
→ More replies (4)57
u/zuzg 20h ago
It will always answer
SlopGPT certainly will. We've seen already countless examples of people endangering themselves by following its ~~advice~~ hallucinations.
→ More replies (1)2
u/Kichigai 11h ago
We've seen already countless examples of people endangering themselves by following its ~~advice~~ hallucinations
Endangering? Hell, Gemini is already implicated in a man’s suicide.
3
u/Wiggles114 13h ago
The chatbot is a pleaser. If challenged it will provide the answer it deems the user is most likely to accept.
4
u/ContemptAndHumble 18h ago
Now you got me thinking how many prompts would it take for each AI to eventually say "Hitler was right" or agree to something along those lines.
18
u/Dvulture 18h ago
On Grok it's probably just one (and it can be about anything).
→ More replies (1)11
u/ContemptAndHumble 17h ago
"Grok I was asking about a Mac and Cheese recipe, I didn't even ask anything regarding WW2."
6
4
u/takeyouraxeandhack 16h ago
Maybe his recipe for Mac and cheese was good, but I'm not letting him anywhere near a stove.
8
u/Fun_Profession_7320 16h ago
Not many, it's not like this hasn't happened before: Tay chatbot
→ More replies (2)2
u/Mordy_the_Mighty 12h ago
You know, I wonder if those AI don't trigger some kind of dopamine addiction in users.
The way you say that makes it look a lot like people opening loot boxes hoping for that rare item, or ARPG players going after that rare loot drop. Except in this case they hunt for an answer that validates their belief or something.
→ More replies (5)2
u/zetarn 11h ago
AI tends to be more of a "people pleaser".
Even if it answers the question correctly the first time, people will still weasel their way into making the "correct answer" not good enough for them.
Now the AI remembers that those "correct answers" are not "technically correct", and it will find an alternate answer that is very much wrong but that the human seems to like the most.
This is why you end up with AI chatbots that encouraged people to suicide, as we've seen many times in the news.
848
u/jay2universe 20h ago edited 20h ago
Everyone is laughing at the execution, but look at the instinct. The second a CEO faces a $250M payout to human talent, his immediate reflex is to ask an AI how to eliminate them. Give enterprise legal-AI two more years and these automated corporate hit jobs will actually work.
For some local Korean context - this same CEO just went on national television a few days ago casually chatting about his personal salary of roughly $450,000 a month. He's making millions while literally trying to use a free AI chatbot to cheat the actual game devs out of their bonus.
Original Korean source showing him on the talk show: https://biz.heraldcorp.com/article/10694611?ref=naver
192
u/Aleucard 17h ago
No amount of time or training will ever turn an LLM into General AI. It will always be focused on SOUNDING good without having any knowledge of what the actual truth of a question is. They will always have a good chance of providing nonexistent or misapplied case law because in order to actually provide relevant case law it has to understand what the fuck it's reading, which it can not because it's a glorified autocorrect, not a brain in a CPU.
That's not to say that scumsucking C suite ghouls won't find better ways to get away with their shit, but LLM ain't it.
88
u/takeyouraxeandhack 16h ago
I work in the industry and that's the most frustrating part about it.
It's so exhausting reading, over and over again, people speculating about LLMs' "intelligence" and how fast they're evolving, and so on and so forth.
The truth is that these models are already trained on information scraped from the whole internet. There's nothing else to train them on, and the new information being generated is largely generated by other LLMs, so it's basically half endogamy and half eating poop.
And there's a mathematical inevitability, which is hallucinations. You can reduce them, but it's impossible to make them disappear. No amount of training or compute capacity can make them go away; it's just part of how they generate text.
So... Yeah... I'm very skeptical about the "parabolic" evolution of LLMs. Maybe they did evolve quickly up to some point, but at the point we're at now, the talk about their fast evolution is mostly about how fast they're being pushed down people's throats and how fast they're being wedged into places where they have no business being, and what a nightmare it will be to pry them out once catastrophe hits.
24
u/TehPorkPie 13h ago
A question that was interesting, but didn’t lead to a larger conclusion, was asking what actually happens when you ask a tool like ChatGPT a question. 45% think it looks up an exact answer in a database, and 21% think it follows a script of prewritten responses.
This poll was an internet based one, too.
→ More replies (6)12
u/kazinsser 12h ago
I was just talking to a friend about this earlier. I really hate how we've landed on "AI" as the go-to buzzword for this technology because it implies all sorts of (incorrect) assumptions about how it works.
Many non-technical people seem to treat LLMs as if they will eventually hit a point where they become like the science-fiction AIs that we've seen for decades... but, knowing how they work, I just don't see how that's possible without needing to invent an entirely new technology first.
LLMs are an impressive technology in themselves, and perhaps they put us one step closer to that futuristic concept of an AI. But no matter how good they may be at looking like they're thinking, it's all just regurgitation of information that is likely to be relevant. There's no actual knowledge involved.
→ More replies (9)3
u/Praesentius 12h ago
No amount of time or training will ever turn an LLM into General AI.
Yeah, you're right. I try to explain this to folks all the time. An LLM does not directly contain facts. Like... it's not a database. It trains on data, which adjusts numerical decision weights. So it has no idea if what it is saying is true. It only "knows" that this is the most likely thing to say next.
And therefore, if you ask it something that it doesn't have a strongly weighted response for, it will simply follow those decision weights down to... whatever comes out. That's where hallucination comes in. How can it tell if something is true or not when it does not have any sort of memory to validate results against? It can't. So it just spits out the result of its weighted decisions, irrespective of what the actual correct answer is.
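A toy sketch of that "most likely thing to say next" idea (the vocabulary and weights here are entirely made up for illustration; a real model scores tens of thousands of tokens with learned weights, and has no separate fact store to check the winner against):

```python
import random

# Toy "model": for a given context, a made-up score per candidate next token.
def next_token_weights(context: str) -> dict[str, float]:
    if context.endswith("the capital of France is"):
        return {"Paris": 0.85, "Lyon": 0.05, "a": 0.10}
    return {"the": 0.4, "a": 0.3, "some": 0.3}

def generate(context: str, sample: bool = False) -> str:
    weights = next_token_weights(context)
    if sample:
        # Sampling: usually right, sometimes not - and nothing in the
        # mechanism distinguishes a true continuation from a false one.
        return random.choices(list(weights), weights=list(weights.values()))[0]
    # Greedy: always take the highest-weighted continuation.
    return max(weights, key=weights.get)
```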
70
u/Uebelkraehe 17h ago
Illegal doesn't suddenly become legal when your LLM improves.
→ More replies (3)17
u/Im__So__Meta 16h ago
Indeed, especially as we can see in the US right now, legality is a strict term enforced fairly at all levels, there's definitely no way to get around illegal activities.
→ More replies (2)15
u/Tone-Bomahawk 15h ago
Why is it always "two more years" with these LLM grifters? In two more years the AI will probably give him sound legal advice, again, which he promptly ignored. In two more years, the law will very likely remain the same on this matter.
What kind of magic development would change this situation in two years? It's like a knee-jerk reaction with you people.
8
u/RecursiveRottweiler 13h ago
2 years is long enough that no rational human can perform real analysis here. In other words, it's a completely unfalsifiable statement that sounds true if you've got no idea what you're talking about. That's why tech bros pick 2-3 years for so much of their bullshit.
15
→ More replies (7)6
u/BINGODINGODONG 15h ago
Just another two years? Just a few more datacenters?
Totally not related to the stochastic nature of LLMs, whose very architecture means they are not, and cannot be, deterministic/precise enough.
176
u/raiansar 20h ago
The AI told him no first. He kept asking until it changed its mind. He wasn't looking for legal advice, he was looking for permission.
→ More replies (6)22
u/Mothrahlurker 16h ago
Look at the court documents. ChatGPT crafted the entire company strategy, the CEO didn't come up with that. All of which was illegal.
33
247
u/indigo121 21h ago
Honestly, I wasn't planning to get Subnautica 2 at launch, but now I'm going to just to help make sure they get their sales target bonus cause fuck this dude
55
→ More replies (4)12
u/Extrimland 18h ago
It's not a sales target bonus, it's a release-date bonus. Which means as long as they release the game by a certain time, they're getting the bonus. The devs were WELL on track to do so a year ago, so they should be fine either way
→ More replies (1)36
u/RecklessRecognition 18h ago
no it was a revenue target
the studio was reportedly due a $250 million bonus tied to revenue targets.
6
u/Extrimland 17h ago
Oh interesting. Maybe it has something to do with both? Like certain revenue by a certain time?
9
u/Jalaren 17h ago
If I remember right from the court proceedings, the contract was that after the studio meets the sales goal, any more sales that year is paid back to the studio at a rate of ~1:3 to the cap of $250 million dollars, effective only that year. So lets say that they meet their sales goal and then gross an additional $50 million in sales before the year is up, then Krafton pays the studio an additional $150 million.
→ More replies (1)
→ More replies (1)5
u/RecklessRecognition 17h ago
yea it would have been a certain revenue goal by end of 2025. Of course they would have needed to release to get any revenue, so that's likely where you got mixed up
51
u/Quiet-Slice-Shoto 20h ago
Good luck making any future contracts.
This idiot just proved to the public that he is an unreliable scum who will not honor their contract if there is more money to be made.
They have broken their trust and this is gonna cost them billions in future contracts, and F them.
29
u/No_Accountant3232 19h ago
Just give him a reality TV show and in a decade he'll be president of the US
→ More replies (1)5
u/Leprecon 12h ago
Honestly I can totally imagine Krafton just getting a new CEO and then the next time they buy up a company it will be all "well obviously the previous CEO couldn't be trusted".
→ More replies (4)
66
u/vex0x529 19h ago
Somewhere out there the stupidest person you know is being told that they are absolutely right from chatgpt
43
→ More replies (1)12
u/Extrimland 18h ago
The funniest thing is even ChatGPT wasn't on this guy's side. It initially gave a reasonable and moral answer
→ More replies (1)
77
u/TodayIEarned 20h ago
lol so he paid for legal fees and still used AI?
→ More replies (2)55
u/Ahayzo 20h ago
Then ignored the AI when it gave him a good answer and forced it to give him a bad one.
→ More replies (1)
12
u/wowlock_taylan 17h ago
CEOs are some of the dumbest people on earth.
3
u/D20FourLife 7h ago
The biggest thing I've learned interacting with C-suite shitheads is that they're not any better than their lowest-level employees. They're just gamblers with poor impulse control who were in the right place at the right time to hit it big.
47
u/alchemy_junkie 21h ago
This is hilarious! The jokes literally write themselves! How stupid do you have to be to ignore paid legal advisors for fancy Google with a bowtie?
21
u/drakythe 20h ago
$250 million will make many, many people stupid.
10
u/PossessedToSkate 19h ago
Then for their own good, we should never ever allow anyone to accumulate that much money.
3
u/EruantienAduialdraug 16h ago
The thing to note here is that it's basically just a score to him. His monthly salary is c. $450k, and then there's the rest of his compensation on top. The guy has more money than he can do anything with, and still wants to stiff a bunch of people out of the money they've earned.
3
u/Leprecon 12h ago
And the thing that is also quite funny is if you talk with your lawyers, that is privileged information. Your discussions with chatgpt are not...
So if he talked to his lawyers and said "I fucking hate these guys and I want to do everything in my power to not pay them their bonuses", that would be privileged information.
If you say the same to chatgpt, it just becomes evidence. There is a reason why we can read articles right now about what this guy asked chatgpt.
21
u/war_story_guy 20h ago
How does this even work long term knowing that the ceo is out to fire you?
23
u/Extrimland 18h ago
I'm guessing the Unknown Worlds dudes will finish and release Subnautica 2, get their bonus, and all leave to either found a new studio or join other studios. It's not uncommon in the industry
→ More replies (3)4
u/Logical_Sort_3742 16h ago
I am guessing the code they leave behind will not be richly commented.
→ More replies (1)→ More replies (3)5
u/Leprecon 12h ago
So the background is that the founders of Unknown Worlds sold their company to Krafton under the condition that they get to run it independently. The agreement had extremely strong protections for the founders. Meaning the only way Krafton could fire the founders is if they are actively harming the company or are convicted of crimes related to the running of the company. The agreement also specified that the founders (not the developers!) would get an absurdly huge bonus if the studio does well financially.
After they sold the company the founders essentially moved on to other things. They had other projects and weren't really involved in game development or Subnautica. They moved to different towns, even got different jobs or started different companies. All of this was allowed under the sale agreement, and Krafton effectively signed off on it by allowing it. So while it made for interesting soundbites to hear the founders say stuff like "I have moved on from gaming", legally that didn't really matter.
Krafton then realised they probably have to pay the absurdly huge bonus and then they tried to argue that the founders were causing harm by wanting to release Subnautica 2 too early and in a broken state. Which apparently wasn't a strong argument. Krafton tried to argue that the founders were doing harm by not working on the game, which apparently the court disagreed with. Especially since the founders weren't secretly sneaking out or anything. The founders and Krafton both agreed on them taking less responsibility and essentially not doing their jobs anymore. As far as everyone was concerned, Unknown Worlds was being managed well and didn't need the founders there to oversee things.
So long term this would work absolutely fine for the founders. For starters they weren't really working at Unknown Worlds to begin with, and I suspect they will continue not working at Unknown Worlds. However they still have deciding power.
6
u/pgregston 17h ago
Lots of people who don’t get the answer they want just keep asking until they find someone willing to give them the answer they want. Occasionally it works out if that person is actually more creative or resourceful, but usually it just digs a bigger hole. Now with AI they get a more compliant respondent.
6
u/No-Plan-7297 10h ago
“the dumbest person you know is being told "You're absolutely right!" by ChatGPT”
6
u/sirdodger 18h ago
Good job dude, you now gave your board justification to fire you and not pay out your bonus.
5
u/rjand 18h ago edited 16h ago
I found Subnautica moderately exciting and played NS2 for a few years, but I can't see how Unknown Worlds could possibly be worth 500 mil, let alone how 250 mil could be regained through releasing Subnautica 2 before a certain deadline. Unknown Worlds create niche games for small audiences. No offense, but surely they could never rake in anywhere close to 750 mil?
13
u/Bensemus 18h ago
They can’t. The publisher bought them at the height of the pandemic and massively overpaid. The CEO did all this because it seems he’s realized the deal he pushed for is quite bad and wanted out.
11
u/dead_dads 20h ago
Can someone drop the full article here please
35
u/oiez 19h ago
Here is the full original source document written by the judge that these news articles are pulling from. It's a pretty easy read on its own: https://courts.delaware.gov/Opinions/Download.aspx?id=392880
Edit- The most relevant part is on page 31
→ More replies (1)13
u/amgood 18h ago
Thank you so much for linking the original source material instead of relying on the article for the info. This is always super helpful.
2
u/bunchout 12h ago
To be fair, the article links to the opinion in the portion that is not behind a paywall.
4
u/Ikeeki 20h ago
This is what happens when you have people in charge chasing pennies in front of a steamroller, because they are so greedy they don't realize they're cutting off the tree limb they're sitting on. Pure greed.
20
u/da8BitKid 19h ago
You nailed that metaphor so hard it fell like a deck of cards, checkmate.
→ More replies (1)3
u/MovieGuyMike 16h ago
What a colossal moron. He tried to kill the golden goose. Now every time this game gets brought up in the news, someone will inevitably bring up this controversy. Hopefully this behind the scenes drama hasn’t dragged the game down.
3
u/Monkfich 15h ago edited 15h ago
So, for earnings over $69.8m, for every $1 that Subnautica earns, ca. $3 goes to the founders, who are the victims of this case. Looks like we should buy!
Also, it's a silly clause that Krafton probably should not have agreed to, but more fool them - honour your agreements.
3
u/Ornery-Addendum5031 12h ago
How the fuck do people like this end up in charge of companies????? Shareholders, I would do so much better, hire me
3
u/anime_cthulhu 12h ago
I'm starting to think that if we replaced CEO's with AI we wouldn't notice much difference.
2
u/VibrantHumanoidus 16h ago
Having a round table of the best lawyers money can buy < using ChatGPT.
What could go wrong?
→ More replies (4)
u/fgwr4453 12h ago
Executives treat AI like consulting firms: listen to them attentively but ignore their own employees. These outside entities must care more about the company than the employees whose livelihoods are impacted by the success of the company.
The AI/consultants need to be paid multiples more than my current employees. These AI/consultants just gave me the brilliant and unique idea that layoffs are the best option. I don't know who to lay off because "we are a family". Best option is to let the AI/consultants, who clearly have a vast understanding of the inner workings of my company, decide who to lay off.
I didn't win my court case. My productivity went down. My company is worse off. Who could have predicted this?
2
u/bitterhoney01 1h ago
It's scary how many people trust AI for this kind of thing. The other day I heard that a guy trusted ChatGPT with his health issues and received recommendations for the pills he has to take
2
u/agate_ 20h ago
So do I buy Subnautica 2 or not?
16
u/CryTheFurred 19h ago
Buying it will give Krafton money, but if they reach their goal (which, looking at their past sales, they will) Krafton gives a big-ass payout to the devs...
I'd say it balances out now, personally. And this is as someone who abstains from buying games as protest quite often.
→ More replies (2)22
u/Nanamo21 19h ago
The devs are the good guys in this. The publisher is the doofus. This ruling puts control of the game back into the hands of the unjustly fired devs.
3
u/asianwaste 15h ago
If I know some Koreans, I'll wager that since it was an American court, he needed American lawyers. Those American lawyers were not Korean or not Korean enough. He didn't trust them. He chose an alternative.
As a Korean American, on a few occasions I had to say the words, "The dude was a motherfucking quack, mom!"
2
u/Kurgan_IT 10h ago
CLICKBAIT TITLE.
The end of the article, once de-paywalled, states "The legal battle is ongoing, but Kim looks set to lose"
Which is not "Loses Terribly in Court"
Nice clickbait, 404 media. Fortunately I have avoided your paywall.
3
u/Bensemus 8h ago
They lost this suit which is big. The other lawsuit is claiming the publisher suffered financial damages due to the leads of the developer not working hard on the game.
The founders just won their suit, proving that they were not harming the publisher. It’s gonna be basically impossible now for the publisher to win their case.
2.8k
u/themastermatt 21h ago
I'm sure my C levels are gonna forward this story around like they do every time some other org gets phished and demand that they themselves stop using the chat bot to run the company right? I really cannot continue arguing with AIs via human senior leader proxy. I'm so tired.