beefnugs 4 days ago

Also understand how they can be used for unique, hard-to-detect "scams" like MaidSafe. "We came up with a brand new amazing way to dht! Like ants!"

This all sounded technically interesting and useful, and only after ingesting huge amounts of the technical details do you realize the whole plan was to force everyone to re-write all internet applications over again (dumb). Then it evolved into some slow scam where, after 10 years or something, they pretend it's still being worked on, but nothing ever finishes.

  • anacrolix 4 days ago

    No, that's because DHTs already work but nobody actually wants them. Then you pretend there's a technical reason to work on DHTs more because you can't deliver your main product.

extragalaxial 5 days ago

[flagged]

  • hansmayer 4 days ago

    Please, please avoid recommending LLMs for problems where the user cannot reliably verify their outputs. These tools are still not reliable (and given how they work, they may never be 100% reliable). It's likely the OP could get a "summary" which contains hallucinations or incorrect statements. It's one thing when experienced developers use Copilot or similar to avoid writing boilerplate and the boring parts of the code - they still have the competence to review, control and adapt the outputs. But for someone looking to get introduced to a hard topic, such as the OP, it's very bad advice, as they have no means of checking the output for correctness. A lot of us already have to deal with junior folks spitting out AI slop on a daily basis, probably using the tools the way you suggested. Please don't introduce more AI slop nonsense into the world.

  • Asraelite 5 days ago

    This is getting downvoted but I would also recommend it. It's much faster than reading papers and, unless you are doing cutting edge research, LLMs will be able to accurately explain everything you need to know for common algorithms like this.

    • hansmayer 4 days ago

      It's getting down-voted because it is very bad advice, one that can be refuted by already known facts. Your comment is even worse in this regard and is very misleading - the LLMs are definitely not going to "accurately explain everything you need to know"; it's not a magical tool that "knows everything", it's a statistical parrot which infers the most likely sequence of tokens, and that results in inaccurate responses often enough. There are already a lot of incompetent folks relying blindly on these unreliable tools, please do not introduce more AI-slop-based thinking into the world ;)

      • Asraelite 4 days ago

        You left out the "for common algorithms like this" part of my comment. None of what you said applies to learning simple, well-established algorithms for software development. If it's history, biology, economics, etc., then sure, be wary of LLM inaccuracies, but an algorithm is not something you can get wrong.

        I don't personally know much about DHTs so I'll just use sorting as an example:

        If an LLM explains how a sorting algorithm works, and it explains why it fulfills certain properties about time complexity, stability, parallelizability etc., and backs those claims up with example code and mathematical derivations, then you can verify that you understand it by working through the logic yourself and implementing the code. If the LLM made a mistake in its explanation, then you won't be able to understand it, because it can't possibly make sense; the logic won't work out.
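        For example (a rough sketch, nothing DHT-specific - a plain merge sort in Python, and every name and check here is just my own illustration): if the explanation claims the sort is O(n log n) and stable, you can implement it and test those claims directly:

          import random

          # Illustrative only: a straightforward merge sort plus a check of the
          # "sorted correctly and stable" claims against Python's built-in sorted().
          def merge_sort(items, key=lambda x: x):
              if len(items) <= 1:
                  return list(items)
              mid = len(items) // 2
              left = merge_sort(items[:mid], key)
              right = merge_sort(items[mid:], key)
              merged, i, j = [], 0, 0
              while i < len(left) and j < len(right):
                  if key(left[i]) <= key(right[j]):  # <= keeps equal keys in original order (stability)
                      merged.append(left[i]); i += 1
                  else:
                      merged.append(right[j]); j += 1
              merged.extend(left[i:])
              merged.extend(right[j:])
              return merged

          # sorted() is stable, so equality checks both correct ordering and stability.
          data = [(random.randrange(10), n) for n in range(1000)]
          assert merge_sort(data, key=lambda p: p[0]) == sorted(data, key=lambda p: p[0])

        If an assertion like that fails, or the complexity argument falls apart when you trace it, you've found the mistake yourself.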

        Also please don't perpetuate the statistical-parrot interpretation of LLMs; that's not how they really work.

        • hansmayer 4 days ago

          I meant it also for the (unwittingly) left-out part of your comment. Firstly, by saying this parrot will explain "everything that you need to know ..." you're pushing your own standards onto everyone else. Maybe the OP really wants to understand it deeply, learn about the edge cases, and understand how it really works. I don't think I would rely on a statistical parrot (yes, that's really how they work, only on a large scale) to teach me stuff like that. At best, they are to be used with guardrails as some kind of a personal version of "rain man", with the exception that the "rain man" was not hallucinating when counting cards :)

        • timothygold 4 days ago

          > Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.

          I'm pretty sure that's exactly how they work.

          Depending on the quality of the LLM and the complexity of the thing you're asking about, good luck fact-checking its output. It takes about the same effort as finding direct sources and verified documentation or resources written by humans.

          LLMs generate human-like answers by applying statistics and other techniques to a huge corpus. They do hallucinate, but what is less obvious is that a "correct" LLM output is still a hallucination. It just happens to be a slightly useful hallucination that isn't full of BS.

          As the LLM takes in inconsistent input and always produces inconsistent output, you *will* have to fact-check everything it says, making it useless for automated reasoning or explanations and a shiny turd in most respects.

          The useful things LLMs are reported to do were an emergent effect, found by accident by natural-language engineers trying to build chatbots. LLMs are not sentient and have no idea whether their output is good or bad.

    • sky2224 5 days ago

      It's getting downvoted because it's the equivalent of saying "google it".

      • stevekemp 4 days ago

        And because LLMs will "explain" things that contain outright hallucinations - a beginner won't know which parts are real and which parts are suspect.

        • hansmayer 4 days ago

          Exactly this. The thing that irritates and worries me is that I notice a lot of junior folks try to apply these machines to open-ended problems the machines don't have the context for. The lawsuits with made-up case citations are just the beginning, I'm afraid; we're in for a lot more slop endangering our services and tools.

      • cpach 4 days ago

        Exactly. Nothing wrong with LLMs, but we’re trying to have a human conversation here – which would be impossible if people had all their conversations with LLMs instead.