With the end of the Strange Loop conference series, I decided to watch some talks from over the years and jot down some notes here. The conference looks interesting and less mainstream than others.
As a starting point, I'll follow some random suggestions found on the internet: a blog post by a friend of mine (a bit unrefined, but it has some tips) and this other best-of list, following leads to other talks mentioned there. I'll update this article as I watch more. This won't be a personal "best of": I'll also include talks that didn't resonate with me much.
-
Simple Made Easy by Rich Hickey (2011): I already wrote a short comment about it. Definitely recommended.
-
An Approach to Computing and Sustainability Inspired From Permaculture, by Hundred Rabbits (2023): a talk advocating sustainable computing. Sustainability here means hardware and software not forced into obsolescence: things you can still use in 30 years because they were designed to be serviceable (or easily reproduced). As a retro-computing enthusiast, this topic is of great interest to me, but the talk delivery was sort of confusing. It felt like a patchwork of ideas (all valid per se) cobbled together with tape. A bit more "glue" while jumping between concepts would have helped. It was hard for me to follow along and find the single message that the speaker wanted me to take home. I was left with some unanswered questions, such as:
- Ok, modern computing is not sustainable. The speaker shows a concept of an assembly language in the works. What problem does this new tool solve? Can I use it for X or Y? Is it meant to be a theoretical exercise?
- The speaker's old ThinkPad's VGA output could not be connected, so someone lent them a laptop to show the slides. The audience giggled. Does this say something about permacomputing supporters? Is the speaker suggesting we bury our heads in the sand and stop the calendar at a year of choice, before progress became "problematic", like some retro-computing enthusiasts advocate?
At the current stage, I see permacomputing as more of a performative / pretentious / geeky concept than a concrete proposal. I too like to think radically, but also pragmatically. It looks like a bunch of "Luddites" who want to make computing more human (great!) and durable (100% agree!), but their proposals are more like let's use this 40-year-old CPU and a fistful of transistors to do 8-bit doodles. There must be a middle ground. I think that middle ground could be in the area of frame.work or MNT Reform laptops.
A few of these permacomputing folks seem to sit at the intersection with other weirdos on the "post-apocalyptic" spectrum: people fantasizing about tiny operating systems 1, planning underground bunkers, stocking up on guns and leveling up their survival skills (but that's another story).
Following the speaker's lead, I've also watched the following two talks.
-
Point-Free or Die: Tacit Programming in Haskell and Beyond by Amar Shah (2016): kind of a theoretical talk about ideas for functional programming. Not my cup of tea. I couldn't find an answer to the question "what would I want to bring home from this talk?".
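For context on the talk's subject: tacit (point-free) style means building a function purely by composing other functions, without ever naming the data argument. A minimal sketch of the idea in Python (the talk uses Haskell, where this reads far more naturally; the helper names here are mine):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

# Pointful: the argument `xs` is named explicitly.
def sum_of_squares(xs):
    return sum(x * x for x in xs)

# Tacit: the same function, defined only in terms of other functions.
square_each = lambda xs: [x * x for x in xs]
sum_of_squares_tacit = compose(sum, square_each)
```

Both versions compute the same thing; the point-free one just never mentions its input.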
-
Programming Should Eat Itself by Nada Amin (2014): another programming-language-theory talk, nothing I could bring home. I mean, great mind, but far too abstract for me.
-
Risks and Opportunities of AI in Incident Management by Emily Arnott (2023): a talk about how LLMs (Large Language Models) can help in case of incidents in production (example: suddenly most of your users cannot log in anymore). I am still skeptical about this latest craze in technology, and I think this talk shows why. The speaker advocates using ChatGPT to write code to fix a bug, or to identify the source of the problem by describing it in natural language. Then they add (quote):
"Do you know exactly how [code produced by the LLM] works? Do you know line by line what it's doing? Maybe in simple cases yes, but the more you rely on AIs, the more you take advantage of its speed, the less you'll be able to have that confidence in every single line of code. And that's OK! Because you probably don't have that level of confidence of every single line of human written code in your codebase."

This, in my opinion, is a false and dangerous characterization. If you don't trust code written by a human, you can go and talk to that human or (if they're not available) you may have documentation. Failing all that, you can at least be sure that someone, at some point, wrote that code by reasoning about a context. An LLM pulls out stuff based on statistical models, so there's a chance the code is formally correct but at the same time wrong in that context.
The speaker concedes the point that LLMs are always confident, even when they are bullshitting you (true, I've experienced that too). A human has different degrees of confidence, and this is useful to immediately flag possible gotchas. Their solution? Ship the code if it seems to fix the urgent issue at hand and plan for a code review later. That's nonsense.
The processes the speaker advocates as safety nets to keep hallucinating LLMs in check look like the same ones you'd use without LLMs: code reviews, tests and all that jazz we know are good practices.
Other points in the talk sound sketchy or just plain wrong as well. The fact that the speaker talks so casually about these bad practices makes me think I wouldn't want to work there (or be their customer).
-
Playing with Engineering by AnnMarie Thomas (2023): I think this is a talk about a woman being (more or less openly) discriminated against by academia, yet making her way teaching at the intersection of performative art and engineering. Definitely a good delivery and a story about creativity, but I didn't find it very captivating; maybe I'm not the target audience.
-
How to Build a Meaningful Career by Taylor Poindexter and Scott Hanselman (2023): an entertaining talk by two speakers about finding meaning when building a career. It's about having different perspectives and expectations regarding your career and not feeling ashamed of that. I was a bit put off by the lack of real content in this talk.
I found this talk a tad too "American": I felt an intent to spread good vibes, but sentences like "luck = opportunity + being prepared" or "in your career making sure you're doing something meaningful can be highly personal - and that's ok!" really made my eyes roll.
I don't understand how these kinds of blanket statements and vague suggestions can actually help people think and reason about their own situation. Or maybe I'm just biased by a mindset closer to I work and get paid, period. Life is elsewhere.
Oh, and also, if you need insulin shots, just don't drink sugary sodas, FFS.
-
Comedy Writing With Small Generative Models by Jamie Brew (2023): interesting and very entertaining. It explains how the core of a language model is made and shows some funny applications. Apart from that, he makes a good point that everything that's been published can be used in ways we didn't imagine ("you are always on the record"), because data on the Web is scraped to build training datasets. He trained his small language models on datasets created by scraping relatively small quantities of data. It was probably out of scope here, but a talk on this topic could also try to answer questions such as 1) what is the workflow for creating and refining large training datasets, and under what conditions is this work carried out (PDF, 74 KiB), and 2) what about the licensing of some of the source material used to build the datasets?
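To give an idea of how small such a model can be: the core of a word-level generative model can be sketched as a Markov chain, a table mapping each n-gram to the words seen after it in the training text. This is my own illustrative sketch (function names are mine), not the talk's actual code:

```python
import random
from collections import defaultdict

def train(text, order=1):
    """Map each n-gram of words to the list of words observed right after it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table, length=20, seed=None):
    """Random-walk the table: repeatedly sample a continuation of the last n-gram."""
    rng = random.Random(seed)
    order = len(next(iter(table)))
    out = list(rng.choice(list(table)))  # pick a random starting n-gram
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:  # dead end: this n-gram was never continued in training
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on a tiny corpus, it already produces plausible-sounding (and often accidentally funny) recombinations of the source text, which is roughly the comedic effect the talk plays with.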
-
The mess we're in by Joe Armstrong (2014): the speaker (who died in 2019) is one of the authors of the Erlang programming language. For this fact alone I think he deserves to be listened to. And to some extent it's also an entertaining talk! It could easily be misinterpreted as the classic "old man shouting at clouds", and to be fair the speaker goes on with some strange theoretical napkin calculations that seem disconnected from the main topic, but they illustrate what is, in the end, his main point: is the ever-increasing complexity of our systems a sustainable computing platform for the future?
We need to actively participate in this future. We are not just passengers on a bus driven by someone else; it's not "just a job". We are at the wheel, and we are responsible for the direction this bus is heading. Our choices have consequences.
In addition, we like to call ourselves "software engineers". But if mechanical or civil engineers were allowed the same level of sloppiness and lack of polish in their jobs that we are allowed (thanks to the dozens of excuses at hand to shrug the blame off our shoulders), I'd probably choose to live in a hut in a forest, far from any technology built by human engineers.
-
The hard parts of Open Source by Evan Czaplicki (2018): a retrospective from a longtime Open Source developer about what it means to live and work in this space. He focuses on the idiosyncrasies of human interaction in the community that a successful Open Source project can create around itself. The off-putting entitlement of some people and the polarization of online discourse are factors that drain motivation. The author tries to explain the origins of these peculiar behaviours by going back to the beginning of the computer era; he tries to connect the dots between seemingly far-apart concepts, and the big picture he builds makes sense. In the end, what he says is nothing new and completely relatable; the tracing he does is elaborate and, in my opinion, a solid and reasonable justification ("people are like this because of this long history and these big reasons"), but perhaps it puts things a bit too much under the microscope. I see the topic in simpler terms; I don't see the need for a lot of analysis when it comes to online assholes. In any case, a pleasant talk with a good delivery.
-
The Early Days of id Software: Programming Principles by John Romero (2022): one thing must be conceded to John Romero: he is a great speaker. In this highly entertaining talk he recalls the story of id Software, from the very beginning to the mid-Nineties (the Quake era). The storytelling is a bit of a heroic tale of a fistful of ultra-nerds laser-focused on delivering great products. From time to time he interrupts with a slide about something they learned about how to develop software. These lessons, while certainly reasonable, I'm not sure would apply as well today; the world has changed since the Nineties, and even for a small group of "bedroom coders", just being really good and pulling off a great game is not enough. There is a lot more work around it, and in the narration this work is not mentioned at all: by his own admission there were no managers, just a producer waiting for the game to be finished. Also, the whole talk is about how hard they worked, apparently without problems that couldn't be solved by throwing more work at them. I am looking at this story with the eyes of many years of development and a very different experience, so it feels a bit cherry-picked. Anyway, a talk to be taken for what it is: pure entertainment, but also inspirational, about games that were (and still are) highly regarded.
Unironically linking the Wayback Machine, because at the time of writing the website is down.