• 0 Posts
  • 4 Comments
Joined 3 years ago
Cake day: June 17th, 2023

  • No. The issue is that an assumption they make in the unsafe block does not actually always hold. They changed the safe Rust code to strengthen the (incorrect) assumption they made in the first place, because that is way easier than rearchitecting the unsafe part. I.e. if the unsafe part were somehow written safely, the mitigation they introduced now would make no difference in behaviour; the behaviour would be correct both before and after.

    Tldr: the problem lies in the unsafe part (a minimal sketch of the pattern is below).
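
    A minimal sketch of that pattern, with made-up names (not the actual code in question): the unsafe block is only sound if an invariant maintained elsewhere in safe code always holds, so a "fix" in safe code can still be papering over a bug that conceptually lives in the unsafe block.

    ```rust
    /// Hypothetical example: the unsafe read assumes `pos < data.len()`,
    /// an invariant that only the surrounding safe code upholds.
    struct Cursor {
        data: Vec<u8>, // assumed non-empty for this sketch
        pos: usize,
    }

    impl Cursor {
        fn current(&self) -> u8 {
            // SAFETY: relies entirely on `pos` staying in bounds. If any safe
            // method can break that invariant, the undefined behaviour
            // originates here, even if the mitigation lands in safe code.
            unsafe { *self.data.get_unchecked(self.pos) }
        }

        fn advance(&mut self) {
            // The kind of mitigation described above: clamp in safe code so
            // the assumption made by the unsafe block can no longer be
            // violated, instead of restructuring the unsafe access itself.
            self.pos = (self.pos + 1).min(self.data.len().saturating_sub(1));
        }
    }

    fn main() {
        let mut c = Cursor { data: vec![10, 20, 30], pos: 0 };
        c.advance();
        println!("{}", c.current()); // prints 20
    }
    ```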


  • It’s my turn to do the obligatory mention of SourceHut :)

    It is in alpha, but it is really promising. It is going all-in on email-based git workflows (which was the original way of doing it, before the GitHub-style PR-based workflow). I love the style and its minimalism - but don’t let that fool you, it has many features you might not see at first glance. Imagine if cgit or gitweb were extended into a software forge with built-in support for email patches, mailing lists, issue tracking and CI.

    If you are the type of person who attracts garbage issue tickets and often has to reject low-effort PRs on your projects, it enforces a really good minimum bar to entry. Of course this comes at the cost of visibility for your projects and a weaker network effect, so I would suggest not using it if you want easy visibility and third-party contributions to your projects.


  • No, I think you misinterpreted (or the original commenter was not specific enough about) what black box refers to here. I don’t mean that they are proprietary or trained in a private/secret way; I mean that the model itself is so huge and impossible to understand that it is basically a black box. There are millions or billions of connections and parameters that do not adhere to any well-defined structure; they just came to form magically through the learning process. You look at a neural network and you have absolutely no idea why it works (a rough illustration of the scale is at the end of this comment).

    This is one of the biggest challenges of bringing AI into the automotive industry, for example. A neural network by itself is not certifiable, because there is no way to prove that it works. I heard about a new-ish field that is trying to engineer structured networks specifically for automotive and similar applications, but I haven’t heard anything since and can’t find an article about it on Wikipedia.

    EDIT: took a few more minutes, but found it :) Neural Circuit Policies is the search term, for anyone interested in an attempt to get closer to certifiable AI.
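
    To give a rough sense of the scale mentioned above, here is a small illustrative calculation (the layer sizes are made up): even a modest fully connected network already has hundreds of millions of weights and biases, none of which map to anything a reviewer could certify line by line.

    ```rust
    /// Parameter count of a plain fully connected (dense) network: for each
    /// consecutive pair of layers, weights (in * out) plus biases (out).
    fn dense_param_count(layer_sizes: &[usize]) -> usize {
        layer_sizes
            .windows(2)
            .map(|pair| pair[0] * pair[1] + pair[1])
            .sum()
    }

    fn main() {
        // Hypothetical image-classifier-sized MLP: 224*224 inputs, two hidden
        // layers of 4096 units, 1000 output classes.
        let layers = [224 * 224, 4096, 4096, 1000];
        println!("{} parameters", dense_param_count(&layers));
        // ~226 million parameters, and that is tiny compared to modern large
        // models, with no human-legible structure behind any individual value.
    }
    ```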