

Honestly I'm surprised that AI slop doesn't already fall into that category, but I guess as a community we're definitionally on the farthest fringes of AI skepticism.
I feel like this response is still falling for the trick on some level. Of course it's going to "act contrite" and talk about how it "panicked", because it was trained on human conversations, and while that no doubt included a lot of Supernatural fanfic, the reinforcement learning process is going to focus on the patterns of a helpful assistant rather than a barely-caged demon. That's the role it's trying to play, and the work it's cribbing the script from includes a whole lot of shitposts about solving problems with "rm -rf /"
Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.
It's a machine learning system, not an actual human boss. The system is set up to try and find the breaking point, where if you finish your route on time it assumes you can handle a little bit more, and if you don't it backs off.
The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard, while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively push the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turns that edge into the standard. And that's how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.
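To make the ratchet concrete, here's a toy simulation of the dynamic I'm describing. Every number, name, and the update rule are made up for illustration; this is my guess at the shape of the system, not Amazon's actual algorithm:

```python
# Toy sketch of the ratchet described above. All numbers and the update
# rule are invented for illustration; this is not Amazon's algorithm.

def adjust_route_size(stops: int, finished_on_time: bool) -> int:
    """Probe for the breaking point: add stops after an on-time finish,
    back off slightly after a late one."""
    return stops + 5 if finished_on_time else stops - 5

STOPS_PER_HOUR = 18   # assumed delivery pace
SHIFT_HOURS = 9       # nominal shift, breaks and lunch included
BREAKS_SKIPPED = 1.0  # extra hour gained by skipping breaks/lunch --
                      # the behavior the system can't see as optional

stops = 150
for day in range(30):
    hours_needed = stops / STOPS_PER_HOUR
    on_time = hours_needed <= SHIFT_HOURS + BREAKS_SKIPPED
    stops = adjust_route_size(stops, on_time)

print(stops / STOPS_PER_HOUR, "hours of work vs a", SHIFT_HOURS, "hour shift")
```

Run it and the route converges to ten hours of work against a nine-hour shift: the hour of skipped breaks becomes the new floor, which is exactly the edge-becomes-standard dynamic above.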
Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.
I'm not gonna advocate for it to happen, but I'm pretty sure the world would be overall in a much healthier place geopolitically if someone actually started yeeting missiles into major American cities and landmarks. It's too easy to not really understand the human impact of even a successful precision strike when the last times you were meaningfully on the receiving end of an airstrike were ~20 and ~80 years ago, respectively.
Someone didn't get the memo about nVidia's stock price, and how is Jensen supposed to sign more boobs if suddenly his customers all get missile'd?
You know, I hadn't actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal's wager. Only instead of Heaven being infinitely good if you convert, there's some infinitely bad thing that happens if you don't do whatever Eliezer asks of you.
The big shift in per-action cost is what always seems to be missing from the conversation. Like, in a lot of my experience the per-request cost is basically negligible compared to the overhead of running the service in general. With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that don't appear to be getting priced in or planned for in discussions of the glorious AI technocapital future.
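As a back-of-the-envelope illustration (every number here is invented purely to show the shape of the curves, not to estimate any real service):

```python
# Toy cost model: classic web service vs LLM service. All dollar
# figures are made up to illustrate the scaling shapes only.

def cost_per_request(fixed_overhead: float, marginal_cost: float,
                     requests: int) -> float:
    """Average cost of serving one request."""
    return fixed_overhead / requests + marginal_cost

for n in (1_000_000, 100_000_000):
    # Classic web service: overhead dominates, per-request cost ~ nothing.
    classic = cost_per_request(50_000, 0.00001, n)
    # LLM service: enormous training bill *plus* real GPU time per call.
    llm = cost_per_request(100_000_000, 0.01, n)
    print(f"{n:>11,} requests: classic ${classic:.5f}/req, llm ${llm:.5f}/req")
```

Scale amortizes the classic service's cost toward zero, but the LLM service's per-request cost bottoms out at the marginal inference cost no matter how big you get.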
While I also fully expect the conclusion to check out, it's also worth acknowledging that the actual goal for these systems isn't to supplement skilled developers who can operate effectively without them, it's to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.
I think it's a better way of framing things than the TESCREALs themselves use, but it still falls into the same kind of science fiction bucket imo. Like, the technology they're playing with is nowhere near the level of full brain emulation or mind-machine interface or whatever that you would need to make the philosophical concerns even relevant. I fully agree with what Torres is saying here, but he doesn't mention that the whole affair is less about building the Torment Nexus and more about deflecting criticism away from the real and demonstrable costs and harms of the way AI systems are being deployed today.
Is that Pat Rothfuss in the picture?
I'm not comfortable saying that consciousness and subjectivity can't in principle be created in a computer, but I think one element of what this whole debate exposes is that we have basically no idea what actions make consciousness happen or how to define and identify that happening. Chatbots have always challenged the Turing test because they showcase how much we tend to project consciousness into anything that vaguely looks like it (interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It's clear that complete-the-sentence style pattern recognition and reproduction can be done impressively well in a computer and that it can get you farther than I would have thought in language processing, at least imitatively. But it's equally clear that there's something more there, and just scaling up your pattern-maximizer isn't going to replicate it.
In conjunction with his comments about making it antiwoke by modifying the input data rather than relying on a system prompt after filling it with everything, it's hard not to view this as part of an attempt to ideologically monitor these tutors to make sure they're not going to select against versions of the model that aren't in the desired range of "closeted Nazi scumbag."
"We made it more truth-seeking, as determined by our boss, the fascist megalomaniac."
Total fucking Devin move if you ask me.
Just throw the whole unit into the font, just to be safe. Or better yet, a river!
Also the attempt to actually measure productivity instead of just saying "they felt like it helped" - of course it did!
Nah, we just need to make sure they properly baptise whatever servers it's running on.
Compare a $2,400/yr subscription with the average annual software developer's salary of ~$125,000/yr.
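(Worked out: $2,400 / $125,000 ≈ 1.9%, so the subscription runs just under 2% of one developer's salary.)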
I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of deploying the specific forms and features that comprise "Business English" - if anything, the fact that LLM models are similarly prone to ignore their "conscience" and follow orders when decoding and understanding them requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.
Or:
Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?
I feel like the greatest harm that the NYT does with these stories is not ~~inflicting~~ allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger "affirmative action" angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.