Anyway, I'll watch the Twitch stream from across the pond.
My question is how far does it go - are the gains going to peter out, or does it keep going or even accelerate? Seems like one of the latter two thus far.
I feel like this comes about because it's the optimal strategy for doing robust one-shot "point fixes", but it comes at the cost of long-term codebase health.
I have noticed this bias towards lots of duplication eventually creates a kind of "AI code soup" that you can only really "fix" or keep working on with AI from that point on.
With the right guidance and hints you can get it to refactor and generalise - and it does it well - but the default style definitely trends to "slop" in my experience so far.
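To make the duplication point concrete, here's a toy illustration (my own made-up example, not output from any particular model) - the default "point fix" style on top, the generalisation you have to ask for below:

    #include <string>

    // The "point fix" duplication style: each fix gets its own near-copy.
    std::string renderUserError(const std::string& msg) {
        return "ERROR [user]: " + msg;
    }
    std::string renderAuthError(const std::string& msg) {
        return "ERROR [auth]: " + msg;
    }

    // ...versus the generalisation you get with explicit refactoring hints:
    std::string renderError(const std::string& domain, const std::string& msg) {
        return "ERROR [" + domain + "]: " + msg;
    }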
I would guess the same way humans do.
Put brain in creative mode, bang out something that works.
Put brain in rules compliance mode and tidy everything up.
Then send for code review.
However, I would be interested in establishing a union for technologists across the nation. Drive quality from the bottom up, form local chapters, collectively bargain.
I would expect this conf to expand on those types of concepts and strategies.
It seems to be socially associated with the Handmade Hero and Jon Blow Jai crowd, which is not so much concerned that their software might be buggy as that it might be lame. They're more concerned about user experience and efficiency than they are about correctness.
This is not at _all_ my interpretation of Casey and JBlow's views. How did you arrive at this conclusion?
> They're more concerned about user experience and efficiency than they are about correctness.
They're definitely very concerned about efficiency, but user experience? Are you referring to DevX? They definitely don't prize any kind of UX above correctness.
Processes, tools, and vigilant diligence seem the most apparent path. Perhaps rehash the 50-year-old debate over professionalization while AI vibe coding is barking at the door - because what could possibly go wrong with even less experience doing the same thing and expecting a different result?
IMHO this group's canonical lament was expressed by Mike Acton in his "Data-Oriented Design and C++" talk, where he asks: "...Then why does it take Word 2 seconds to start up?!"[0]. See also Muratori's bug reports which seem similar[1].
I think it is important to note, as the parent comment alludes, that these performance problems are real problems, but they are usually not correctness problems (for the counterpoint, see certain real-time systems). To listen to Blow, who is actually developing a new programming language, his issue with C++ is mostly about how it slows down his development speed -- that is, C++ compilers aren't fast enough -- not about the "correctness" of his software [2].
Blow has framed these same performance problems as problems of software "quality", but this term seems to share the same misunderstanding as "correctness", and therefore looks to me like another equivocation.
Software quality, to me, is dependent on the domain. Blow et al. never discuss this. Their argument is more like: what if all programmers were like John Carmack and Michael Abrash? Instead of recognizing that software is an economic activity, and that certain marginal performance gains are often left on the table because most programmers can't be John Carmack and Michael Abrash all the time.
[0]: https://www.youtube.com/watch?v=rX0ItVEVjHc
[1]: https://github.com/microsoft/terminal/issues/10362
[2]: https://www.youtube.com/watch?v=ZkdpLSXUXHY
I am going to keep saying this: if your main tagline/ethos is broken by your own website, you have failed.
* On mobile, the topics are hidden without scrolling over them, and you can't read many of the topics without scrolling right as you read.
* The background is very distracting and disrupts readability.
* None of your speakers have links to their socials/what they are known for.
* > Who are the organizers? Sam, Sander and Charlie.
  * Ah yes, my favourite people... At least hyperlink their socials.
The QE engineers and the development engineers were in entirely separate branches of the org chart. They had different incentive structures. The interface documentation was the source of truth.
The release cadence was slow. QE had absolute authority to stop a release. Between their tests and test automation, QE wrote more code than the development engineers did.
> In a charming small town
But between the sparse website, invite-only and anonymous organizers, it just feels like it's emphasizing the reactionary vibes around the Handmade/casey/jblow sphere. Like they don't want a bunch of blue-haired antifa web developers to show up and ruin everything.
Glad to see they got Sweden's own Eskil Steenberg though. Tuning in for that at least.
At least for Casey, the case is less that everyone should be Carmack or Abrash and more that programmers, through poor design choices, often prematurely pessimise their code when they don't need to.
And stability is important, but not critical. The main way they want to achieve it is that errors should be very obvious, so that they can be caught easily in manual testing. So C++-style UB is not great, since you may not always catch it, but crashing on reading a null pointer is great, since you'll easily see it during testing. Also, performance concerns trump correctness: paying a performance cost to get some safety (e.g. enforced array bounds checks) is lazy design - why would you write out-of-bounds accesses in the first place?
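A hypothetical sketch of that philosophy in C++ (my own example, not Casey's or Blow's actual code):

    #include <cassert>
    #include <cstddef>

    // "Make errors loud": during development, an assert that crashes
    // immediately on a bad index beats silent UB. In release builds
    // (with NDEBUG) the assert compiles away, so there is no per-access
    // bounds-check cost - the bug is supposed to be fixed by then.
    struct Buffer {
        int*        data;
        std::size_t len;

        int& at(std::size_t i) {
            assert(i < len && "out-of-bounds access: fix the caller");
            return data[i];
        }
    };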
Software development and quality assurance should be tightly integrated and should work together on ensuring a good product. Passing builds over a wall of documentation is a recipe for disaster, not good quality software.
https://handmade.network/blog/p/8989-separating_from_handmad...
https://handmadecities.com/news/splitting-from-handmade-netw...
I sometimes wonder if there could be an optimal number of microservices. As far as I know, no one has connected issue data to the number of microservices before. Maybe there's an optimal number, like "8", which leads to a lower number of bugs and faster resolution times.
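A toy sketch of what connecting that data could look like (entirely invented numbers, C++ just for illustration):

    #include <cstdio>
    #include <map>
    #include <vector>

    // Hypothetical analysis: bucket projects by microservice count and
    // compare average open-issue counts. All data below is made up.
    int main() {
        std::vector<std::pair<int, int>> samples = {
            {3, 15}, {8, 7}, {8, 9}, {25, 40}, {3, 12}, {25, 34},
        };
        std::map<int, std::pair<int, int>> buckets; // services -> (sum, n)
        for (auto [services, issues] : samples) {
            buckets[services].first  += issues;
            buckets[services].second += 1;
        }
        for (const auto& [services, agg] : buckets)
            std::printf("%2d services: %.1f open issues on average\n",
                        services, double(agg.first) / agg.second);
    }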
I don't think we'll reach this promised land™ until incentives re-align. Treating software as an assembly line was obviously The Wrong Thing, judging by the results - the problem is: how can we ever move to a model that rewards quality, perhaps similar to (book) authors and royalties?
Owner-operator SaaS is about as close as you can get but limits you to web and web-adjacent.
There's a reason web developers, and the ecosystem/community around them, are the butt of many jokes. I don't think it's at all surprising that the injection of identity politics into the software industry has had a negative effect on quality.
Get a couple of shredded guys and gals to show off how fit they are, so everyone feels guilty about snacking past 8 PM.
Sell another batch of "how to do pushups", followed by "how to do pushups vol. 2" and "pushup pro, this time even better".
Whereas in the end, normal people don't get paid for getting shredded; they get paid for doing their stuff.
I just constantly feel like I am not a proper dev because I mostly skip unit tests - but on the other hand, over the last 15 years I've built a couple of systems that worked and brought in value.
However, you should want to build quality software because building quality things is fulfilling. Unfortunately, certain systems have made the worship of money the be-all and end-all of human experience.
Quality is a measurement. That's how it works in hardware land, anyway. Product defects - and, crucially, their associated cost to the company - are quantified.
Quality is not some abstract, feel good concept like “developer experience”. It’s a real, hard number of how much money the company loses to product defects.
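To make that arithmetic concrete, a toy version of the hardware-style calculation (every number here is invented):

    #include <cstdio>

    // Cost of poor quality = units shipped x defect rate x cost per defect.
    // Illustrative numbers only.
    int main() {
        double units_shipped   = 100000.0;
        double defect_rate     = 0.002;   // 0.2% of units come back
        double cost_per_defect = 180.0;   // rework + shipping + support, USD
        std::printf("Annual cost of defects: $%.0f\n",
                    units_shipped * defect_rate * cost_per_defect);
        return 0;
    }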
Almost every professional software developer I’ve ever met is completely and vehemently opposed to any part of their workflow being quantified. It’s dismissed as “micromanagement” and “bean counting”.
Bruh. You can’t talk about quality with any seriousness while simultaneously refusing metrics. Those two points are antithetical to one another.
Obviously, this assumes you write enterprise grade code. YMMV
They did TDD for a long time, they wrote Clean Code™, they organised meetups, sponsored and went to conferences, they paid 8th Light consultants to come teach (this was actually worth it!) and sent people to Agile workshops and certificates.
At first, I was like "wow, I am in heaven".
About a year later, I noticed so much repetition and waste of time in the processes.
Code was at a point where we had a "usecase" that calls a "repository" that fetches a list of "ItemNetworkResponse" which then gets mapped into "Item" using "ItemNetworkResponseToItemMapper" and tests were written for every possible thing and path.
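For anyone who hasn't lived it, a hypothetical rendering of that chain (the type names come from the description above; the C++ and everything else is my guess - the real codebase presumably wasn't C++):

    #include <vector>

    struct ItemNetworkResponse { /* raw wire fields */ };
    struct Item                { /* domain fields */ };

    struct ItemNetworkResponseToItemMapper {
        Item map(const ItemNetworkResponse&) { return Item{}; }
    };

    struct ItemRepository {
        std::vector<ItemNetworkResponse> fetchItems() { return {}; } // network stub
    };

    struct GetItemsUseCase {
        ItemRepository                  repository;
        ItemNetworkResponseToItemMapper mapper;

        std::vector<Item> execute() {
            std::vector<Item> items;
            for (const auto& response : repository.fetchItems())
                items.push_back(mapper.map(response));
            return items; // four types and three hops to return a list
        }
    };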
They had enterprise clients, were charging them nicely, paying developers nicely, and pocketing extra money thanks to the "safety buffers" added by engineers, managers and sales people alike, basically doubling the length of any project for "safety".
The company kept to their "high dev standards" which meant spending way more time, and thus costing way more, than generic cookie-cutter agencies would cost for the same project.
This was great until every client wanted to save money.
The company shut down last year.
lol, fire business analysts and let tech writers do their job. Sounds like some kind of VC black company.
1. It is partly because the typical metrics used for software development in big corporations (e.g., test coverage, cyclomatic complexity, etc.) are such snake oil. They are constantly misused and/or misinterpreted by management, and because of that cause developers a lot of frustration (a classic example is sketched after this list).
2. Some developers see their craft as a form of art, or at least an activity for "expressing themselves" in an almost literary way. You can laugh at this, but I think it is a very humane way of thinking. We want to feel a deeper meaning and purpose in what we do. Antirez of Redis fame has expressed something like this. [0]
3. Many of these programmers are working with games and graphics and they have a very distinct metric: FPS.
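On point 1, here is the classic coverage-gaming example I mean (a toy, in C++): a test that achieves 100% line and branch coverage while asserting nothing.

    int discount(int price, bool vip) {
        return vip ? price / 2 : price;
    }

    // This "test" exercises both branches, so line and branch coverage
    // report 100% -- yet it checks no results, so any bug in discount()
    // passes. The metric is satisfied; the code is effectively untested.
    int main() {
        discount(100, true);
        discount(100, false);
        return 0;
    }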
Quality is not a "real, hard number" because such a thing would depend entirely on how you collect the data, what you count as data, and how you interpret the data. All of this is brimming with controversy, as you might know if you had read more than zero books about qualitative research, epistemology, the philosophy, history, or practice of science. I say "might" because of course, the number of books one reads is no measure of wisdom. It is one indicator of an interest to learn, though.
It would be nice if you had learned, in your years on Earth, that you can't talk about quality with any seriousness while simultaneously refusing to accept that quality is about people, relationships, and feelings. It's about risks and interpretations of risk.
Now, here is the part where I agree with you: quality is assessed, not measured. But that assessment is based on evidence, and one kind of evidence is stuff that can be usefully measured.
While there is no such thing as a "qualitometer," we should not be automatically opposed to measuring things that may help us and not hurt us.
That's a pretty broad claim. This conference could be a response to a perceived negative effect on quality, but claiming it as fact seems hard to back up, to me.