• 8 Posts
  • 43 Comments
Joined 1 year ago
Cake day: February 2nd, 2024






  • Starting this off with Baldur Bjarnason sneering at his fellow techies for their ā€œreadingā€ of Dante’s Inferno:

    Reading through my feed reader and seeing tech dilettantes ā€œdoingā€ Dante in a week and change, I’m reminded of the time in university when we spent half a semester discussing Dante’s Divine Comedy, followed by tracing its impact and influence over the centuries

    I don’t think these assholes even bother to read their footnotes, and their writing all sounds like it comes from ChatGPT. Naturally so, because I believe them when they claim they don’t use it for writing. They’re just genuinely that dull

    At least read the footnotes FFS

    If they were reading Dante for pleasure, that’d be different—genuinely awesome, even. But all of this is framed as doing the entirety of ā€œhumanitiesā€ in the space of a few weeks.








  • New article from Axios: Publishers facing existential threat from AI, Cloudflare CEO says

    Baldur Bjarnason has given his commentary:

    Honestly, if search engine traffic is over, it might be time for blogs and blog software to begin to deny all robots by default

    Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.

    Up until this point, the Archive enjoyed easy access to large swathes of the 'Net - site owners had no real incentive to block new crawlers by default, but the prospect of getting onto search results gave them a strong incentive to actively welcome search engine robots, safe in the knowledge that they’d respect robots.txt and keep their server load to a minimum.

    Thanks to the AI bubble and the AI crawlers it’s unleashed upon the 'Net, that has changed significantly.

    Now, allowing crawlers by default risks AI scraper bots descending upon your website and stealing everything that isn’t nailed down, overloading your servers and attacking FOSS work in the process. And you can forget about reining them in with robots.txt - they’ll just ignore it and steal anyways, they’ll lie about who they are, they’ll spam new scrapers when you block the old ones, they’ll threaten to exclude you from search results, they’ll try every dirty trick they can because these fucks feel entitled to steal your work and fundamentally do not respect you as a person.

    Add in the fact that the main upside of allowing crawlers (turning up in search results) has been completely undermined by those very same AI corps, as ā€œAI summariesā€ (like Google’s) steal your traffic through stealing your work, and blocking all robots by default becomes the rational decision to make.
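    For reference, that deny-by-default stance is a one-stanza robots.txt (the standard is RFC 9309; though, as noted, the worst scrapers simply ignore it):

    ```
    # Block every crawler from the entire site
    User-agent: *
    Disallow: /
    ```

    Well-behaved bots like the Internet Archive’s crawler will honour this, which is exactly the crossfire problem described below.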

    This all kinda goes without saying, but this change in Internet culture all but guarantees the Archive gets caught in the crossfire, crippling its efforts to preserve the web as site owners and bloggers alike treat any and all scrapers as guilty (of AI fuckery) until proven innocent, and the web becomes less open as a whole as people protect themselves from the AI robber barons.

    On a wider front, I expect this will cripple any future attempts at making new search engines, too. In addition to AI making it piss-easy to spam search systems with SEO slop, any new start-ups in web search will struggle with quality websites blocking their crawlers by default, whilst slop and garbage will actively welcome their crawlers, leading to your search results inevitably being dogshit and nobody wanting to use your search engine.








  • Idk, personally I kind of expect the AI makers to have at least had the sense to let their bots do math with a calculator rather than guesswork. That seems like an absurdly low bar, both for testing the thing as a user and as a feature to think of.

    You forget a few major differences between us and AI makers.

    We know that these chatbots are low-quality stochastic parrots capable only of producing signal-shaped noise. The AI makers believe their chatbots are omniscient godlike beings capable of solving all of humanity’s problems with enough resources.

    The AI makers believe that imitating intelligence via guessing the next word is equivalent to being genuinely intelligent in a particular field. We know that a stochastic parrot is not intelligent, and is incapable of intelligence.

    AI makers believe creativity is achieved through stealing terabytes upon terabytes of other people’s work and lazily mashing it together. We know creativity is based not in lazily mashing things together, but in taking existing work and using our uniquely human abilities to transform them into completely new works.

    We recognise the field of Artificial Intelligence as a pseudoscience. The AI makers are full believers in that pseudoscience.