Toqi Tahamid Sarker
  • About
  • Publications
  • Projects
  • Resume
  • Blog
© 2026 Toqi Tahamid Sarker

#llm

6 posts tagged with “llm” · all posts

The Idea File: Why LLM Agents Change How We Share Work (x.com)

Karpathy's follow-up to a viral tweet. The argument: now that agents can write the code, what you actually want to share is the idea, not the implementation. He calls it an "idea file": a short spec of what to build, nothing more. Worth thinking about for anyone who shares research tools or scripts with collaborators.

Link · Apr. 4, 2026 · llm, agents, research, productivity

Using LLMs to Build Personal Research Knowledge Bases (x.com)

Karpathy on using LLMs to build knowledge bases from your own reading, papers, and notes instead of asking the model to recall from its training data. For niche research areas, the model just doesn't know enough. You feed it your corpus, and it becomes a reference you can actually interrogate. For fields like precision agriculture or remote sensing where survey coverage is thin, this is genuinely practical.

Link · Apr. 2, 2026 · llm, research-tools, productivity, knowledge-management

LLM Architecture Gallery: Every Major Architecture in One Place (sebastianraschka.com)

Sebastian Raschka collected architecture diagrams for most of the major LLM families in one place: GPT, BERT, T5, LLaMA variants, Mistral, Gemma. When a paper says it builds on LLaMA-2 with GQA (grouped-query attention) and you want to know what that actually looks like, this is faster than digging through GitHub READMEs.

Link · Mar. 15, 2026 · llm, transformers, architecture, learning-resources

Everything We Learned About LLMs in 2025: Simon Willison's Annual Roundup (simonwillison.net)

Simon Willison's annual LLM recap, this time 26 sections long. It covers reasoning models, multimodal models, tool use, agents, fine-tuning, inference efficiency, safety, and open weights. He's been doing this for three years, so there's a lot of accumulated context. Don't read it straight through; pick a section from the table of contents and start there.

Link · Dec. 31, 2025 · llm, ai, year-in-review, research

Reading Research Papers with a 3-Pass LLM Method (x.com)

Andrej Karpathy's reading habit: three passes through anything worth understanding. The first pass is manual. On the second, ask the LLM to explain and summarize. On the third, do Q&A on the parts that didn't land. He says he comes away with noticeably better understanding than if he'd just moved on. It works especially well for CV papers where the actual contribution is buried under five pages of related work.

Link · Nov. 18, 2025 · research, llm, productivity, academic

What We Know About AI Agents: A 264-Page Survey from Meta, DeepMind, Stanford (arxiv.org)

A 264-page survey on AI agents from researchers at Meta, Yale, Stanford, Google DeepMind, and Microsoft. Covers memory, planning, tool use, multi-agent coordination, and evaluation. If you only read one chapter, make it the one on evaluation. That's where most agent benchmark claims quietly stop making sense.

Link · Apr. 5, 2025 · agents, llm, research, survey

Tags

productivity (7)
ai (6)
llm (6)
research (5)
tools (4)
deep-learning (4)
learning-resources (4)
claude-code (3)
phd (3)
transformers (2)
agents (2)
research-tools (2)
academic (2)
memory (1)
computer-vision (1)
knowledge-management (1)