wow. was expecting to agree heavily with the article, but it felt like reading intangible fluff. is that just me?
when they do talk directly about the issue, they say things like "AI models that scrape artists' work without compensation," which is not how i would phrase the actual concerns. no mention of things like models specifically built to manipulate the general populace, which i see still gets no attention. having models learn from the world we live in to create a general-use model is not the issue, unless you're thinking about biases. i'm an artist, and for 20 years i've witnessed how terrible artist communities are at understanding things like copyright and economic imbalance/redirection. that's unfortunate, because artists are getting screwed, but the take on it is just wrong, and this article doesn't touch any of it in a meaningful way.
there are definitely issues with alignment and interpretability that need to be understood as we move forward, and that's where the effort should be focused in these academic settings. if you want to focus on the existential, you should be looking directly at current models and how they could conceptually steer toward or away from such threats, while working on understanding and interpreting those models. we aren't just handing superpowers to a narrow RLHF-tuned model.
at least they mentioned climate change, since climate and economic imbalance are really the main existential risks we currently face. AI development is likely the only thing that will help us with them.
i think melanie mitchell is supposed to be interviewed on MachineLearningStreetTalk soon, and i assume she will have a good take on it.
so, TLDR: we likely won't have progress without interpretability, and that should be kept in mind while developing better machines. don't read this article; just wait for the melanie mitchell interview, as i'm sure she's heated after that frustrating munk debate.
Yeah, it annoys me that so many people don't seem to realise AI and automation are pretty much our only real hope of fixing our climate issues. Automated construction will allow more efficient buildings, which, when designed with AI tools, can easily incorporate the newest and best design methods. AI tools will likewise help scientists and engineers design and run experiments that are totally out of the scope of current practice.