5 comments

  • bob1029 45 minutes ago
    The biggest problem with recurrent spiking neural networks is searching for them.

    Neuromorphic chips won't help because we don't even know what topology makes sense. Searching for topologies is unbelievably slow. The only thing you can do is run a simulation on an actual problem and measure the performance each time. These simulations turn into tar pits as the power law of spiking activity kicks in. Biology really seems to have the only viable solution to this one. I don't think we can emulate it in any practical way. Chasing STDP and membrane thresholds as some kind of schematic for AI is absolutely the wrong path.

    We should be leaning into what our machines do better than biology, not what they do worse. My CPU doesn't have to leak charge or simulate any delay if I don't want it to. I can losslessly copy and process information at rates that far exceed biological plausibility.
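The "run a simulation and measure" loop this commenter describes can be sketched minimally. Everything below is an illustrative assumption, not a real benchmark: a leaky integrate-and-fire recurrent network with made-up parameters, scored by a crude spike-count proxy, with random topology search as the harness.

```python
import numpy as np

def simulate_lif(weights, steps=1000, dt=1.0, tau=20.0,
                 v_thresh=1.0, v_reset=0.0, input_rate=0.05, seed=0):
    """Minimal leaky integrate-and-fire simulation of a recurrent network.

    weights: (n, n) recurrent weight matrix -- the 'topology' under search.
    Returns the total spike count as a crude fitness proxy; note that
    evaluating a single candidate topology means running this whole loop.
    """
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    v = np.zeros(n)                          # membrane potentials
    spikes = np.zeros(n, dtype=bool)
    total_spikes = 0
    for _ in range(steps):
        # external Poisson drive plus recurrent input from last step's spikes
        i_ext = (rng.random(n) < input_rate).astype(float)
        i_in = i_ext + weights @ spikes.astype(float)
        # leaky integration: dv/dt = (-v + I) / tau
        v += dt * (-v + i_in) / tau
        spikes = v >= v_thresh
        v[spikes] = v_reset                  # reset neurons that fired
        total_spikes += int(spikes.sum())
    return total_spikes

# Topology "search" = evaluate many random candidates, one full simulation each.
rng = np.random.default_rng(42)
scores = [simulate_lif(rng.normal(0, 0.1, (50, 50))) for _ in range(5)]
```

Even this toy version makes the cost structure visible: there is no gradient through the search, so every candidate topology pays for a full simulation.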
  • RaftPeople 1 day ago
    From the article:

    > *Cause and Effect: If Neuron A fires just a few milliseconds before Neuron B, the brain assumes A caused B. The synapse between them gets stronger.*

    A recent study from Stanford found that it's more complex than this rule: some synapses followed it, some did the opposite, etc.
    • kevlened 1 hour ago
      > A recent study from Stanford

      Source?
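The rule RaftPeople quotes from the article ("A fires just before B, so strengthen A→B") is usually written as a pair-based STDP window. A minimal sketch, with illustrative amplitudes and time constants (not taken from the Stanford study or any fitted data):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for a pre/post spike-time difference.

    dt_ms = t_post - t_pre. Positive dt (pre fired first, the 'causal'
    ordering) potentiates the synapse; negative dt depresses it; both
    effects decay exponentially with the time gap.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # causal: strengthen
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # anti-causal: weaken
    return 0.0
```

The point of the cited study, as the commenter summarizes it, is precisely that real synapses don't all follow this textbook window: some invert it.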
  • mike_hearn 1 day ago
    I guess the obvious question is whether something that mimics biology closer is actually useful. Computers are useful exactly because they aren&#x27;t the same as us. LLMs are useful because they aren&#x27;t the same as us. The goal is not to be as close to biology as possible, it&#x27;s to be useful.
    • 9wzYQbTYsAIc 1 hour ago
      Neural networks have turned out to be pretty useful. The goal of distributed parallel processing wasn't to recreate the brain but to recreate its capabilities.
  • 7777777phil 2 days ago
    Neuromorphic chips have been 5 years away for 15 years now. Nevertheless, the Schultz dopamine-TD error convergence is one of the coolest results in neuroscience.
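The Schultz result alluded to here is that dopamine neuron firing resembles the temporal-difference prediction error from reinforcement learning. The quantity itself is one line; the scenario values below are illustrative, not data:

```python
def td_error(reward, v_s, v_s_next, gamma=0.9):
    """TD(0) prediction error: delta = r + gamma * V(s') - V(s).

    Schultz et al. observed dopamine firing patterns that track this
    signal: a burst for unexpected reward, roughly baseline firing when
    reward is fully predicted, and a dip when a predicted reward is
    omitted.
    """
    return reward + gamma * v_s_next - v_s

# Unexpected reward (nothing predicted, V(s)=0): positive error -> burst.
burst = td_error(1.0, 0.0, 0.0)
# Fully predicted reward (V(s) already equals the reward): error near zero.
flat = td_error(1.0, 1.0, 0.0)
# Predicted reward omitted: negative error -> dip below baseline.
dip = td_error(0.0, 1.0, 0.0)
```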
  • geremiiah 1 hour ago
    Interesting topic, but why am I reading an LLM-generated summary?
    • voidUpdate 15 minutes ago
      > "If you’ve been following my recent posts on Metaduck, you know I spend my days building infrastructure for AI agents and wrangling LLMs into production"

      Because LLM users use LLMs for everything