Resisting AI: An Anti-fascist Approach to Artificial Intelligence by Dan McQuillan is our next book! Reading discussions will take place primarily through video meetups. This forum is simply an organizing space for the reading group.
Each meeting will take place from 7pm to 8pm BST on the fourth Thursday of the month. Here’s the tentative reading schedule (it can change by popular consent):
It’s possible that Dan’s role in the group may evolve with time. They simply wish to spectate and see what effect their book has on readers.
In any case, I think they might benefit from knowing about this group and could, on that basis, use it to further their (and, to an extent, our) mission.
Be excellent to each other is the guiding principle of Noisebridge. Wikipedia uses a somewhat similar rule, which they call “the fundamental rule of all social spaces. Every other policy for getting along is a special case of it.” Unlike Wikipedia, Noisebridge takes a positive approach, and avoids the practice of officially enumerating the myriad potential special cases; “be excellent” is enough.
We make official Noisebridge decisions by consensus, which means the willing consent of all of our members. Decisions are typically made at our weekly meetings or via our online management venues, and items proposed for consensus are announced at least a week in advance to give everyone time to hear about them. Conceivably, members could block by proxy if they are unable to attend or if they wish to block anonymously.
Doing excellent stuff at Noisebridge does not require permission or an official consensus decision. If you’re uncertain about the excellence of something you want to do, you should ask someone else what they think.
Also see the founding ideas of CryptoParty. Thanks for your invitation / permission to not need permission. That is how I roll. A consensus / temp check is nice though, and thanks for indicating you might like Dan to be part of this process somehow…
So Dan offered this up anyhow off the back of my Mastodon post, so it looks like a post-final-chapter wrap-up AMA/Q&A is on the cards. To this end, I think it would be good for us to collect the questions we ask over the course of the book into a document, so we can bring back the ones it would be great to hear answered from the source.
Image above reads:
Hi Josh, great to see that the group will be reading Resisting AI. If it would be of interest, I’d be very happy to do a Q&A/AMA with the group after you’re done.
Good session all, here are the notes I took ahead of this in case they are of use. We shall select a chair for the next session in the coming week or two; if you’d like to nominate yourself then that would be just fine:
At what level is everyone coming into this? Why do you care about this subject, what do you want to learn?
In terms of ideology, we can refer to a widely used, if somewhat condensed, summary of fascism that describes it as ‘palingenetic ultranationalism’ (Griffin, 1993). These two words distill the ideology into features that are constant over time, and help us to avoid getting diverted into looking for exact repeats of fascist rhetoric from the 1930s.
Palingenetic Ultranationalism, conceptually is this something agreed with, familiar?
The network can’t tell us why a particular pattern in any layer is significant: it delivers a prediction, not an explanation. So, while neural networks can extract predictions from messy input data with uncanny effectiveness, they paradoxically cast a long shadow over our chances of understanding any trade-offs they make in the process.
It seems to me that in the past there has been an informational asymmetry (e.g. Facebook) that benefitted the problematic Big Tech company, but with AI we’ve reached a point where even the people who seek to benefit financially from the system don’t really understand what is happening inside the box. Agree?
One of the latest language models at the time of writing, called GPT-3, has 175 billion weights that need to be optimized. Training its cousin, the BERT algorithm, which is used for natural language inference, has the same carbon emissions as a trans-American flight, while using a method called ‘neural architecture search’ to optimize the hyperparameters of a similar model produces the same carbon emissions as five cars over their entire lifetimes (Strubell et al, 2019).
When it comes to resistance, many things need to be brought into the fold; this being said (and I’m aware that Dan here is highlighting rather than suggesting this as a route) do you think that environmental impact would be a good angle?
I have heard some claim that scientists lack a conceptual understanding of how neural networks work. These same people then claim that it’s dangerous to apply neural networks to very sensitive decision problems, reasoning that “if we don’t understand how the machine made this life-changing decision, then we cannot reasonably rely on the machine’s decision.” What exactly is the conceptual understanding that we lack? How would one explain that lack of understanding to an 8-year-old human?
Thanks for the recommendations of the Tech Won’t Save Us episodes; I started the Emily Bender one and am really enjoying it. The books on fascism I recommend are White Skin, Black Fuel by the Zetkin Collective, which McQuillan cites in the book, and Post-Internet Far Right: Fascism in the Age of the Internet by 12 Rules For What, published by Dog Section Press.
@all we’re looking to see if it’s possible to move the RG back by one week to the 31st; if that is gonna make it difficult for anyone then please let me know.
Hey @all, a decision was made to move it a bit next time, so we are meeting via the normal link on the 18th of October, a Wednesday, at the normal time
Hi all, I realise I can no longer do this coming Wednesday. I did say I’d lead the session. I can rearrange but I remember there not being many options so please do the session without me.