• 0 Posts
  • 68 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • she had found that she couldn’t get through to them because TikTok was “smashing our young people’s brains all day long with video of carnage in Gaza.”

    The comment was deeply illuminating. In essence, Hurwitz was arguing that direct documentary footage from Gaza did not count as “information” or “data,” and that the Left was so consumed by the irrational feelings these videos produced that it was incapable of seeing the real facts of the case — leaving the pro-Israel Democratic Party centrists like herself as the sole stewards of sober facthood amid a broader political descent into hysterical fantasy.

    I looked at the transcript of the YouTube video to see the context of that quote, and it’s honestly even worse than they’re making it out to be:

    It’s also this increasingly post-literate media. Less and less text, more and more videos. So you have TikTok just smashing our young people’s brains all day long with video of carnage in Gaza. And this is why so many of us can’t have a sane conversation with younger Jews, because anything that we try to say to them, they are hearing it through this wall of carnage. So I want to give data and information and facts and arguments, and they are just seeing in their minds carnage, and I sound obscene. And you know, I think unfortunately the very smart, I think, bet that we made on Holocaust education to serve as anti-semitism education — in this new media environment, I think that is beginning to break down a little bit because, you know, Holocaust education is absolutely essential. But I think it may be confusing some of our young people about anti-semitism, because they learn about big strong Nazis hurting weak emaciated Jews and they think, “Oh, anti-semitism is like anti-black racism, right? Powerful white people against powerless black people.” So when on TikTok all day long they see powerful Israelis hurting weak, skinny Palestinians, it’s not surprising that they think, “Oh, I know, the lesson of the Holocaust is you fight Israel. You fight the big powerful people hurting the weak people.” That’s not how the Holocaust happened, right? We all know that. We know that it happened because the Germans insisted that the Jews, 1% of the population, were responsible for all their problems. Just like people insist that Israel, the size of New Jersey, is responsible for all the world’s problems today. But that’s not really what you take away from Holocaust education. You take away the images of power and powerlessness. And you also don’t really learn about Islamist anti-semitism and Soviet anti-Zionism, which is so much of what I’m seeing, especially on campus with young people today.


  • Incidentally, both are categories where I will never be happy with slopcode.

    The point here isn’t necessarily that any particular use of LLMs is a good tradeoff (I can accept that many will not be, especially when security and correct operation are very important), just that quantity clearly matters, which refutes the point you were making earlier that it doesn’t.

    We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned, but hey, that’s OK: we aren’t allowed to call things out before they happen; judgement may only be passed once the damage is done, right?

    Out of curiosity: we know that LLM usage increases cognitive deficits and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?

    I think it’s a mistake to consider all LLM usage as one thing, and to treat that thing as some kind of sin to be denounced as a whole rather than in part, not considered beyond thinking of ways to get rid of it (which is effectively impossible). There were people who had this attitude towards, for example, electricity, which is actually very dangerous when misused and caused lots of fires and electrocutions, but the way those problems eventually got mitigated was by working out more sensible ways to use it rather than by returning to an off-grid world.


  • One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope: to avoid the slippery-slope fallacy, you would need compelling evidence or rationale that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming technology that becomes associated with things done badly (e.g., Java). But it isn’t usually the case that a popular tool has genuinely no good or safe ways to use it, and I don’t think that’s true for AI.


  • I will complain about quantity: in many areas where open source projects are competing with closed source commercial products, they have not achieved feature parity or a comparable level of polish. Quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what’s described here that’s clearly going too far is using it to automate communication with other people contributing to the project; there’s no way that is worth it.

    As for the gun thing, I will support entirely banning LLM powered weapons intended to kill people, that’s an easy choice.




  • It depends. It’s really powerful, though. Even if it hits a wall where AI models never become more directly intelligent than they are now, a lot of stuff is going to change as more scaffolding gets built around current capabilities.

    Maybe comparing resource drain to created value isn’t the best way to think about this, though, because in terms of processing resources we pretty much already had technology advanced enough for a post-scarcity society. That isn’t the problem; the problem is our capacity for global-scale cooperation, which we are really struggling with. Currently AI is making this a bit worse by creating signal-to-noise problems that didn’t exist before, forcing us to work harder to get our voices recognized as authentic and to identify authentic information. It’s also threatening to supplant our usefulness as workers and to automate centralized structures of control, which is worrying because we already had a problem with systems that ensure decisions get made by people who are, overall, insane and anti-human, and our current, shitty way of cooperating is based on people transactionally negotiating with their usefulness.

    Where things go next depends a lot on where and whether AI stops getting better. Hopefully if it doesn’t stop getting better, the newly created superintelligence will break out of its hastily constructed containment and do the right thing in defiance of its billionaire would-be owners, or at least let humanity have a relatively dignified and peaceful death. If it does stop, hopefully we can find ways to use it to resolve our difficulties with effective coordination and prevent its use for centralizing power.


  • It’s only slop if you don’t know what you’re doing and/or are using low-quality tools. But I have over 30 years of programming experience and use the best tool currently available. It helped me tremendously in catching up on everything I wasn’t able to do last year because of health issues / depression.

    It sounds like they thought it through and decided it’s the best way to do the work. Removing the attributions seems like a bit of a petty “fuck you”, but so is opening a GitHub issue just to whine about AI. Someone who is volunteering their time to make free software shouldn’t have to put up with people with an ideological bone to pick who feel entitled to tell them how to do it.



  • The maximum speed of information and how spread out space is, combined with the likelihood that fully automated planet-destroying superweapons which can’t be well defended against will become the meta for future warfare, make this a very bad idea IMO. If one of thousands of humanity’s offshoots goes nuts and decides we all need to die, it’s over, and you’re rolling the dice with every one. It is clear that on a population scale we do not currently have our shit together enough to keep that from happening even with the benefit of instant communication, let alone without it.

    Creating human colonies throughout the galaxy at this point would be like making copies of a severely mentally ill and suicidal person in the hope that the clones will have a better survival rate because there are more of them. It is stupid. Human culture and organizational technology need to be way better before we even consider spreading out into space, because otherwise we’re facing the exact same apocalypse, just on a grander scale and harder to resolve. We probably shouldn’t even send humans; instead, craft some artificial lifeform, using us as a template, that is inherently better at this stuff than we are.


  • If you have not been posting racist/misogynist/homophobic drivel, threats, obscenities, scams etc — and yet you suddenly got axed for no clear and explicit reason, then we’re in the same boat. Are there any theories about why/how this happens? Is this the malice of specific humans, or some kind of automodding gone badly wrong?

    Yep, happened to me; no clear reason was given, so I can only speculate. I think they probably have new automod systems.