Ask HN: Relatively SoTA LLM Agents from Scratch?

4 points by solsane 3 days ago | 3 comments

  • huevosabio 3 days ago
    The Olmo models are, AFAIK, the only SOTA-ish ones with fully open source code and data. Their report is fantastic: https://www.datocms-assets.com/64837/1763662397-1763646865-o...

    It should give you an idea of how hard it is to do a SOTA model from scratch!

    If you relax the SOTA aspect, Karpathy's nanochat has you covered: https://github.com/karpathy/nanochat

  • bjourne 3 days ago
    Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827 The training algorithms themselves are relatively simple (base training, fine-tuning, RL); what's critical is the scale, i.e. the engineering infrastructure. The authors recommend a cluster of at least 128 GPUs and many petabytes of training data.
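
    To make the "relatively simple" point concrete, here is a minimal next-token-prediction sketch of the base-training stage in PyTorch. The TinyLM model, the toy sizes, and the random batches are placeholders of my own, not anything from the article; real runs differ mainly in scale (tokens, parameters, cluster), not in the shape of this loop.

        import torch
        import torch.nn as nn

        VOCAB, CTX, DIM = 256, 64, 128  # toy sizes, for illustration only

        class TinyLM(nn.Module):  # hypothetical stand-in for a real LLM
            def __init__(self):
                super().__init__()
                self.embed = nn.Embedding(VOCAB, DIM)
                layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
                self.blocks = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(DIM, VOCAB)

            def forward(self, x):
                # causal mask so each position only attends to the past
                mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
                h = self.blocks(self.embed(x), mask=mask, is_causal=True)
                return self.head(h)

        model = TinyLM()
        opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
        loss_fn = nn.CrossEntropyLoss()

        for step in range(100):  # real runs: vastly more steps over real corpora
            # random tokens stand in for a tokenized training corpus
            batch = torch.randint(0, VOCAB, (8, CTX + 1))
            logits = model(batch[:, :-1])  # predict the next token at each position
            loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

    Fine-tuning reuses this same loop on curated prompt/response data; the RL stage swaps the loss for a reward-driven objective. The hard part the article stresses is everything around the loop: data pipelines, sharding, and keeping 128+ GPUs fed.
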
  • walpurginacht 3 days ago
    I'd suggest reading HuggingFace's writeup from when they trained SmolLM3:

    https://huggingface.co/spaces/HuggingFaceTB/smol-training-pl...

    It's a rare, detailed look into the entire process.