Enoril

  • 0 Posts
  • 136 Comments
Joined 2 years ago
Cake day: August 10th, 2023




  • Because, even today, you can’t and never will have a 100% reliable answer.

    You need at least two different validators to reduce the probability of errors. And you can’t just run the same check twice with the same AI, as both runs will share the same flaws. You need to check the result from a different point of view, whether in terms of technology or of resources/people (see the sketch below).

    This is the principle we have applied in aeronautics for decades, and even with these layers of precaution and security, accidents still happen.

    ML is like the aircraft industry a century ago: safety rules will be written in the blood of this technology’s victims.
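    To make the dissimilar-redundancy idea concrete, here is a minimal sketch (the validators are hypothetical toy checks, not a real system): two validators built on different principles examine the same output, and any disagreement is escalated instead of trusted.

    ```python
    # Minimal sketch of dissimilar redundancy with hypothetical toy checks:
    # the same answer is judged by two validators built on different
    # principles, and any disagreement is escalated instead of trusted.

    def looks_valid_rule_based(answer: str) -> bool:
        # Toy rule-based validator (e.g. format and keyword checks).
        return bool(answer.strip()) and "error" not in answer.lower()

    def looks_valid_statistical(answer: str) -> bool:
        # Toy validator built on a different principle (a crude length
        # heuristic here, standing in for a statistical model).
        return 10 <= len(answer) <= 10_000

    def verdict(answer: str) -> str:
        a = looks_valid_rule_based(answer)
        b = looks_valid_statistical(answer)
        if a and b:
            return "accept"
        if not a and not b:
            return "reject"
        return "escalate"  # the two points of view disagree

    print(verdict("The fuel quantity is 4200 kg."))  # accept
    print(verdict("error"))                          # reject
    print(verdict("ok"))                             # escalate
    ```

    The point is not the toy checks themselves but the structure: a shared flaw can fool one validator, and is much less likely to fool two that were built differently.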



  • With all this shit going on, I’m happy to see him smile.

    First, you could see from his body language that he was worried about having to delay his call (it could have been critical information), hoping it would not upset Macron, then the relief and big smile when Emmanuel probably said “Yes, sure buddy, take your time”.

    I have a lot of respect for Zelenskyy, because you need resilience to handle the crazy pressure and responsibility he faces every day.

    I really hope he survives this mess and isn’t killed by an FSB or CIA agent or coup.








  • Making headlines is not proof of quality. It’s just the latest buzzword. You should be less influenced by trends and more by real results.

    Btw, I’ve participated in this kind of summit, even as a speaker. These events are more a marketing and lobbying tool for consulting firms than a real breakthrough event for the technology.

    They did the same for the sovereign cloud years ago. Lots of money (our taxes) handed out, fancy events, fancy speeches. Concrete results: still waiting.

    And yes, this ML training approach already shows its limitations (hence the “thing of the past” remark). Until recently, you could improve the quality of the answers by providing more training data. But now they’ve reached the limit, as no more data can be given.

    It’s just a matter of time before the bubble explodes.


  • Sorry, but no.

    It’s good when what you are trying to do has been done in the past by thousands of people (thanks to the free training data). But it’s really bad for new use cases. After all, it’s a glorified and expensive auto-complete tool trained on code they parsed (see the sketch at the end of this comment). It’s not magic, it’s math.

    But you don’t get intelligence or creativity from these tools. It’s math! Math is the least creative domain on earth. Since when is being a programmer just typing portions of code from boilerplate / examples from the internet?

    It’s the logical thinking: taking into account all the parameters and constraints, breaking problems into pieces of code, checking it, testing it, deploying it, supporting it.

    OK, the goal of programming is to solve a problem. But usually not all the parameters of the problem can be reduced to a mathematical form.

    AIs are far from being able to do that, and the gain/cost ratio is not proven at all. These companies are so committed to AI (in terms of money invested) that THEY MUST make you use their AI products, whatever their quality. They even use a marketing term to hide their products’ bad answers: hallucinations. Hallucination is just a fancy word to avoid saying: totally wrong.

    Do you find it normal to buy a solution that never produces 100% good results (more like a 20% failure rate)?

    In my industry, this AI trend (pushed mainly by managers who don’t really know what programming, or for that matter “AI”, actually is) generates a lot of bad-quality code from our junior devs. And it’s not something I want to push to production.

    In fact, a lot of PoCs around ML never go from the R&D phase to real production. It’s too risky for the business (as human lives could be impacted).
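    Here is a minimal sketch of the “auto-complete is math” point, with a made-up toy corpus: a bigram model that simply emits the most frequent continuation seen in training, and has nothing to offer for anything it has never seen.

    ```python
    # Toy "autocomplete as math": a bigram model that always emits the
    # most frequent next token observed in training. The corpus and the
    # prompts below are made up for illustration.
    from collections import Counter, defaultdict

    corpus = "for i in range ( 10 ) : print ( i )".split()

    # Count which token follows which in the training data.
    follow = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow[prev][nxt] += 1

    def complete(token: str, steps: int = 5) -> list[str]:
        out = [token]
        for _ in range(steps):
            candidates = follow.get(out[-1])
            if not candidates:
                break  # never seen in training: the model is stuck
            out.append(candidates.most_common(1)[0][0])
        return out

    print(complete("for"))    # reproduces the memorised pattern
    print(complete("while"))  # unseen token: no continuation at all
    ```

    A real LLM is vastly bigger and uses learned probabilities instead of raw counts, but the failure mode on patterns absent from the training data is the same in kind.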




  • Ask the AI to answer something totally new (not matching any existing training data) and watch what happens… It’s highly probable that the answer won’t be logical.

    Reasoning is being able to improvise a solution from the provided inputs, past experience and knowledge (formal or informal).

    AI, or should I say machine learning, is not able to do that today. It is only mimicking reasoning.




  • You normally have 2 segregated electrical systems (1 & 2) with, for each system, several sub-segregations (primary, secondary, essential, emergency, with bus bars and contactors that can shed some non-essential systems depending on rules or on a switch in the overhead panel) and several sources of power (engines, APU, batteries, sometimes a RAT).

    Black boxes don’t have their own battery (too dangerous: the battery could destroy the recordings when damaged, and it would also require specific maintenance), but normally they have several power sources (see the sketch below). Losing power like that is strange and could indicate a fire or a maintenance problem. The on-board batteries should be able to work for at least 40 min without engines… but they had a running engine as far as I know… that doesn’t make any sense.

    The APU can be run while flying; you must be below a certain flight level to use it (<FL100?).
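    As a purely hypothetical illustration of that multi-source idea (the priority order and source names are made up, not taken from any real aircraft manual): a recorder bus keeps picking the highest-priority source still available, and only goes dark when nothing is left.

    ```python
    # Hypothetical sketch of power-source prioritisation for a recorder
    # bus. Source names and priority order are illustrative only.
    SOURCES = ["engine_gen_1", "engine_gen_2", "apu_gen", "battery"]

    def powered_by(available: set[str]) -> str | None:
        # Take the highest-priority source that is still available.
        for source in SOURCES:
            if source in available:
                return source
        return None  # total loss: the recorders stop

    print(powered_by({"engine_gen_1", "battery"}))  # engine_gen_1
    print(powered_by({"battery"}))                  # battery
    print(powered_by(set()))                        # None
    ```

    Which is why a dead recorder with an engine still running is so odd: in a scheme like this, something upstream of the whole priority chain has to have failed.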