
Machine learning resources


Online resources : πŸ“‘ dataloop | πŸ€— huggingface models & papers


Coding assistants : πŸ“‘ Windsurf VS Code plugin πŸ“‘ Refact.ai πŸͺ„ CodeConvert

APIs : πŸ›°οΈ Anthropic API πŸ›°οΈ Google GenAI πŸ›°οΈ OpenAI-python …
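
A minimal call sketch with the openai-python client (model name and prompt are illustrative; assumes `OPENAI_API_KEY` is set in the environment):

```python
# Minimal chat-completion sketch with openai-python (>= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, swap in whatever you use
    messages=[{"role": "user", "content": "Summarize attention in one sentence."}],
)
print(response.choices[0].message.content)
```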

LLMs : πŸ¦™ Ollama, 🎀 Whisper, πŸ“œ Gemma, πŸ“œ spaCy
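
A minimal sketch of querying a local Ollama server over its REST API (assumes `ollama serve` is running and the model has been pulled, e.g. `ollama pull llama3`):

```python
# Ask a locally served Ollama model a question via its HTTP endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```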

Light LLMs : litgpt πŸ“œ Pythia πŸ“œ TinyLlama πŸ“œ NanoGPT-128M
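
A minimal sketch of running one of these small models with Hugging Face transformers (the TinyLlama checkpoint name is its published model ID; roughly a 2 GB download):

```python
# Generate a short completion with TinyLlama-1.1B on CPU or GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Small language models are useful because", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```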

Physics : PhysicsNeMo βš›οΈ (e.g. Fourier πŸͺ¨ darcy_fno)

Geophysics : SPADE-Terrain-GAN 🌏 ada_multigrid_ppo

Digital Rock Physics : πŸͺ¨ GeoSlicer

Training : πŸš„ Lookahead πŸš… Gymnasium
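
A minimal sketch of the Gymnasium environment loop (random agent on CartPole; replace the sampled action with a learned policy):

```python
# Standard reset/step loop of the Gymnasium API.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()  # random policy, for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```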

Distributed training : NCCL πŸ–§ GLOO πŸ–§ MPI
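
A minimal sketch of choosing between these backends in torch.distributed: NCCL for GPU collectives, Gloo as the CPU fallback (assumes launch via `torchrun --nproc_per_node=N script.py`, which sets the rendezvous environment variables):

```python
# Initialize a process group with the backend matched to the hardware.
import os
import torch
import torch.distributed as dist

backend = "nccl" if torch.cuda.is_available() else "gloo"
dist.init_process_group(backend=backend)
if backend == "nccl":
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # one GPU per process
print(f"rank {dist.get_rank()}/{dist.get_world_size()} using {backend}")
dist.destroy_process_group()
```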

ML deployment : llama.cpp πŸ“¦ PyTorch 💚 ONNX πŸ“¦ JAX ONNX Runtime πŸ“¦ GGML 🧊 MLflow
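
A minimal sketch of the PyTorch 💚 ONNX path: export a toy model to an `.onnx` file, which onnxruntime (or other ONNX backends) can then serve:

```python
# Export a small PyTorch model to ONNX with a dynamic batch dimension.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
dummy = torch.randn(1, 16)  # example input that fixes the graph's shapes
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["x"], output_names=["y"],
    dynamic_axes={"x": {0: "batch"}, "y": {0: "batch"}},
)
```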

UI : πŸͺŸ ComfyUI

Agents : ⛓️ stanford-oval/storm, πŸ”± ottomator

Datasets : VQA πŸ‘‡ 🌦️ WeatherBench πŸ—ΊοΈ COCO πŸ–ΌοΈ FineWeb
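
A minimal sketch of sampling FineWeb without downloading the whole corpus, using the datasets streaming mode (`sample-10BT` is one of its published configs):

```python
# Stream a few FineWeb documents from the Hugging Face hub.
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb", name="sample-10BT",
                  split="train", streaming=True)
for example in ds.take(3):
    print(example["text"][:80])
```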

Computational costs, based on F. G. Raeini's MSc work, per batch of 128 samples (GPU: N100, CPU: AMD 5700U):

| Model      | Num. params | Size (MB) | Inf. time (GPU) | Train time (CPU) | Acc. (VQA v1/2, AOK) |
|------------|-------------|-----------|-----------------|------------------|----------------------|
| ViLT       | 82M         | 470       | 2 s             | 25 s             | 72%, 44%             |
| ResAttLSTM |             | 80        | 2.5 s           |                  | 62%, 30%             |

Larger models:

GIT: 707 MB | Qwen2-72B: 43 GB | BLIP: 990 MB-1.9 GB | Florence-230M | LLaVA-7B: 15 GB | LAVIS
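
A minimal sketch of VQA inference with one of these larger models (BLIP base via transformers; the image URL is a standard COCO example):

```python
# Answer a question about an image with Salesforce/blip-vqa-base (~990 MB).
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

name = "Salesforce/blip-vqa-base"
processor = BlipProcessor.from_pretrained(name)
model = BlipForQuestionAnswering.from_pretrained(name)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
img = Image.open(requests.get(url, stream=True).raw)
inputs = processor(img, "How many cats are in the picture?", return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```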


VQA datasets:

| Dataset    | VQA-v2     | VQA-v1     | AOK-VQA      | VizWiz    |
|------------|------------|------------|--------------|-----------|
| train, val | 443k, 214k | 214k, 121k | 17.0k, 1.14k | 200k, 40k |