Yahoo Web Search

Search results

  1. Jan 6, 2024 · I like to train Deep Neural Nets on large datasets. - karpathy

  2. karpathy / nn-zero-to-hero — Public, MIT license, 10.4k stars, 1.2k forks. Neural Networks: Zero to Hero. A course on neural networks that starts all the way at the basics. The course is a series of YouTube videos where we code and train neural networks together.

  3. Jan 20, 2018 · Describing a new pet project that tracks active windows and keystroke frequencies over the duration of a day (on Ubuntu/OSX) and creates pretty HTML visualizations of the data. This allows me to gain nice insights into my productivity. Code on Github. Jul 3, 2014 · Feature Learning Escapades

  4. karpathy / llama2.c — Public, MIT license, 16k stars, 1.8k forks. llama2.c. Have you ever wanted to inference a baby Llama 2 model in pure C? No? Well, now you can! Train the Llama 2 LLM architecture in PyTorch, then inference it with one simple 700-line C file (run.c).

  5. May 7, 2022 · Andrej Karpathy. I like to train Deep Neural Nets on large datasets. 53.7k followers · 7 following. Stanford. https://twitter.com/karpathy. karpathy/README.md: I like deep neural nets. Pinned: nanoGPT Public — The simplest, fastest repository for training/finetuning medium-sized GPTs (Python, 23.7k stars, 3.1k forks). micrograd Public.

  6. Neural Networks: Zero to Hero. A course by Andrej Karpathy on building neural networks, from scratch, in code. We start with the basics of backpropagation and build up to modern deep neural networks, like GPT.

  7. Chapter 1: Real-valued Circuits. In my opinion, the best way to think of Neural Networks is as real-valued circuits, where real values (instead of boolean values {0,1}) “flow” along edges and interact in gates. However, instead of gates such as AND, OR, NOT, etc, we have binary gates such as * (multiply), + (add), max or unary gates such as ...
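The "real-valued circuit" idea in that snippet can be sketched in a few lines: values flow forward through gates like + and *, and gradients flow backward through the same gates via the chain rule. This is an illustrative sketch (the variable names and the example circuit f = (x + y) * z are assumptions, not taken from the chapter):

```python
# Real values "flow" forward through gates; gradients flow backward.
# Example circuit: f = (x + y) * z, built from a + gate and a * gate.

def forward(x, y, z):
    q = x + y       # + gate
    f = q * z       # * gate
    return q, f

def backward(x, y, z, q):
    # local gradients of the * gate: d(q*z)/dq = z, d(q*z)/dz = q
    df_dq = z
    df_dz = q
    # the + gate routes the incoming gradient unchanged to both inputs
    df_dx = df_dq * 1.0
    df_dy = df_dq * 1.0
    return df_dx, df_dy, df_dz

x, y, z = -2.0, 5.0, -4.0
q, f = forward(x, y, z)
print(f)                     # -12.0
print(backward(x, y, z, q))  # (-4.0, -4.0, 3.0)
```

The backward pass is just the chain rule applied gate by gate, which is the intuition the chapter builds toward backpropagation in full neural networks.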
