I just…
Am I wrong here? Like, look, shame me. I work in machine learning and have since 2012. I don’t do any of the LLM shit. I do things like predicting wildfire risk from satellite imagery, or biomass in the Amazon, soil carbon, shit like that.
I’ve tried all the code assistants. They’re fucking crap. There’s no building an economy around these things. You’ll just get dogshit. There’s no building institutions around these things.
Heh, that’s the joke going around now.
AI works, it replaces workers, we lose our jobs.
AI doesn’t work, bubble pops, we lose our jobs.
You are right in every serious part of the world.
But add “venture capital” to the equation and it works out stronger than anything else so far.
I think it’s supposed to work like, “well, even if you are right about the massive utility of AI, is that still what we should be aiming for?”
It gets around the combative “you’re wrong, AI is garbage” argument. The people boosting AI because they believe that even if it sucks now, it’ll get better… those people can probably follow this argument much more easily.
It sucks, and it’s at the point now where we’re hitting diminishing returns, so I’m not sure it will get better.
If you want a demo on how bad these AI coding agents are, build a medium-sized script with one, something with a parse -> process -> output flow that isn’t trivial. Let it do the debug, too (like tell it the error message or the unwanted behaviour).
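For concreteness, here’s the shape of task I mean; the log format is made up, and any non-trivial parse -> process -> output job would do:

```python
import sys
from collections import defaultdict

# Parse: read "timestamp,endpoint,latency_ms" rows from stdin.
# Process: aggregate per-endpoint latency stats.
# Output: print a summary table.
def main():
    stats = defaultdict(lambda: [0, 0.0, 0.0])  # endpoint -> [n, total, worst]
    for line in sys.stdin:
        parts = line.strip().split(",")
        if len(parts) != 3:
            continue  # a real version should report malformed rows
        _, endpoint, latency = parts
        ms = float(latency)
        s = stats[endpoint]
        s[0] += 1
        s[1] += ms
        s[2] = max(s[2], ms)
    for endpoint, (n, total, worst) in sorted(stats.items()):
        print(f"{endpoint}\t{n}\t{total / n:.1f}ms avg\t{worst:.1f}ms max")

if __name__ == "__main__":
    main()
```

Something like this, but bigger and messier, with enough moving parts that review and optimization actually matter.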
You’ll probably get the desired output if you’re using one of the good models.
Now ask it to review the code or optimize it.
If it were a good coding AI, this step shouldn’t involve much, since it would have been applying the same reasoning during the code-writing process.
But in my experience, this isn’t what happens. For a review, it has a lot of notes. It can also find and implement optimizations. The weights are the same; the only difference is that the context of the prompt has changed from “write code” to “optimize code”, which changes the correlations involved. There is no “write optimal code”, because it’s trained on everything and the kitchen sink, so you’ll get correlations from good code, newbie coders, and lesson examples of bad ways to do things (especially if they’re presented in a “discovery” format, where a prof intended to talk about why a slide is bad but didn’t include that on the slide itself).
It’s funny. I see the phrase “AI doomsday scenario” and I immediately picture devastating cascading consequences caused by someone mistakenly putting too much trust in some kind of agentic AI that does things poorly and breaks a lot of big important things.
I’m just not seeing a scenario where AI causes devastating disruption based on its own ultra competence. I’m much more scared of AI incompetence.
Your job sounds really cool! How likely is Alberta to be on fire again this year?
Gimme some coordinates.
57.4228475, -113.8340952
Well, for one, that area already burned pretty recently, so it’s pretty unlikely to burn again any time soon.
But as part of a larger picture:
The area does experience fire-weather conditions for some portion of the year:
Here we’re looking at HDWI (the hot-dry-windy index), where a “loose” definition of fire weather is HDWI above 200. HDWI is built from a few factors: how hot it is, how dry it is, and how fast the air is moving. Hot, dry air moving quickly = fire weather.
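A rough sketch of the computation (the published HDW index takes the max wind and max vapor-pressure deficit over roughly the lowest 500 m of the atmosphere; this surface-only version is a simplification):

```python
import numpy as np

def vpd_hpa(temp_c, rh_pct):
    # Vapor pressure deficit: saturation vapor pressure (Bolton's
    # approximation, in hPa) minus the actual vapor pressure.
    es = 6.112 * np.exp(17.67 * temp_c / (temp_c + 243.5))
    return es * (1.0 - rh_pct / 100.0)

def daily_hdw(temp_c, rh_pct, wind_ms):
    # Hourly arrays (length a multiple of 24). HDW for each day is the
    # max over that day of wind speed times VPD.
    hourly = wind_ms * vpd_hpa(temp_c, rh_pct)
    return hourly.reshape(-1, 24).max(axis=1)
```

With wind in m/s and VPD in hPa, a windy day with very dry air lands in the hundreds, which is where that loose “above 200” line comes from.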
The number of fire-weather days per year has been increasing, and in very recent years (the past decade) the rate of change has increased and become statistically significant:
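If you wanted to reproduce that kind of trend test, the minimal version is a least-squares fit on the annual counts. A sketch (daily_hdw_values and day_years are hypothetical arrays; scipy’s linregress does the fit):

```python
import numpy as np
from scipy.stats import linregress

def fire_weather_day_trend(daily_hdw_values, day_years, threshold=200.0):
    # Count days above the HDW threshold in each year, then fit an
    # ordinary least-squares line through the annual counts.
    years = np.unique(day_years)
    counts = np.array([(daily_hdw_values[day_years == y] > threshold).sum()
                       for y in years])
    fit = linregress(years, counts)
    return fit.slope, fit.pvalue  # extra days/year per year, trend p-value
```

The slope is extra fire-weather days per year, per year; p < 0.05 is the usual bar for “statistically significant”. Fitting the last decade separately is how you’d see the rate itself increasing.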
So it’s not a particularly fire-prone area, but it’s getting worse, and it’s getting worse at a faster rate.
That would be the first part of the analysis I would run. After that, we’d look for historically “anomalous” periods. It’s not enough to look at averages; that washes over important features in the data. We need to look for the specific periods where fire weather manifests.
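Mechanically, “looking for specific periods” is just run-detection on the hourly series instead of averaging it. A sketch, reusing the HDW threshold from above:

```python
import numpy as np

def fire_weather_runs(hourly_hdw, threshold=200.0):
    # Find every maximal run of consecutive hours above the threshold.
    # Returns (start_hour, length_in_hours) pairs.
    flags = np.concatenate([[0], (hourly_hdw > threshold).astype(int), [0]])
    edges = np.diff(flags)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return list(zip(starts, ends - starts))
```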
This is another way of thinking about fire risk. Here we’re going to count the amount of time, beyond the first 12 hours, that an area spends in sustained fire-weather conditions. Basically, a bit of time in bad conditions isn’t the end of the world, but as you stay in fire-weather conditions, fire risk increases exponentially (as plants/fuels continue to dry out).
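The 12-hour idea is then just a per-run excess. A hypothetical helper building on fire_weather_runs from the previous sketch (the 12-hour onset is my rule of thumb here, not a standard constant):

```python
def sustained_exposure(hourly_hdw, threshold=200.0, onset_hours=12):
    # Hours spent in fire weather beyond the first `onset_hours` of each
    # run -- the part that keeps drying fuels out.
    runs = fire_weather_runs(hourly_hdw, threshold)
    return [int(length - onset_hours) for _, length in runs
            if length > onset_hours]
```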
If I were writing an insurance product for you, I would count the number of events in a given magnitude bucket and give you a risk rating. Here, licking my thumb and sticking it in the air, I would say… “not that bad”.
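The bucket-counting part would look something like this; the edges and labels are illustrative, not an industry standard:

```python
import numpy as np

def magnitude_buckets(event_magnitudes, edges=(12, 48, 168)):
    # Count events per magnitude bucket (here: excess hours of sustained
    # fire weather from sustained_exposure above).
    bins = [0, *edges, np.inf]
    counts, _ = np.histogram(event_magnitudes, bins=bins)
    return dict(zip(["minor", "moderate", "major", "extreme"], counts))
```

The mapping from bucket counts to a premium is the actuarial part; the thumb-in-the-air version is just eyeballing how full the “major” and “extreme” buckets are.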
Much of my work is around modeling in the wildland-urban interface. You picked an almost entirely wilderness area. Since there are no structures, I can’t do the next analysis, but it would look something like this:
Most of my work is about figuring out what the impacts of wildfire on the built environment are going to be. Also, the free structure dataset I have access to doesn’t cover Canada and I’m not going to spend money buying the structures for you (unless you REALLY want me to).
Those first figures are all specific to the coordinates you provided. The final figure is just an example.
Phwoar.
Can I subscribe to your AI posts?